For the final installment of this series, we’re going to take everything we’ve learned and put it all together.

Premise

In case you haven’t been paying attention, I really don’t like pointing and clicking my way through a GUI to do, well, just about anything. For any task that needs to be done more than once, I’d much rather spend the time upfront automating it, in order to save time later. Besides, automating tasks is fun! So it only makes sense that I’d want to use the same approach to auditing.

In SQLAudit 201, we created a SQLAudit database to act as our central audit repository. That’s where we’re storing the info on our current audits, and importing data from our audit files into a table for permanent storage and reporting.

The other thing we’re going to use that SQLAudit database for is to hold our “master” database audit specifications. These “master” specs will be used as templates to deploy auditing to other databases/instances. How, you ask? By using PowerShell, of course!

How it works

I’ve split the work into 2 scripts: one to deploy an audit, one to remove an audit. Both can be executed interactively or in batch mode.

If executed interactively, the script prompts the user for the target instance and database, login, and password. It then connects to the master instance and lists the available audit specs in the SQLAudit database, along with any server audit specs starting with “master”. The user is then asked to specify an audit to deploy. The script verifies that the user specified a valid master audit specification and determines whether the selected spec is a database or server audit specification.
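Under the hood, that lookup is easiest to picture as a pair of catalog queries. Here’s a sketch of one way to list the candidate specs (the “master” naming filter comes from the convention above; everything else is standard catalog views):

```sql
-- List master database audit specs stored in the SQLAudit database,
-- plus any server audit specs following the same naming convention.
SELECT name, 'Database' AS spec_type
FROM SQLAudit.sys.database_audit_specifications
WHERE name LIKE N'Master%'
UNION ALL
SELECT name, 'Server'
FROM sys.server_audit_specifications
WHERE name LIKE N'Master%';
```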

The script then connects to the target instance and determines whether the target audit object exists. If it doesn’t, the audit object is created and a record is inserted into the SQLAudit.dbo.AuditLocator table. If the audit does exist, the user is prompted to overwrite the definition. This is useful if you’ve modified the location or file options in the master audit object and want to propagate those changes. If the path to the audit file has changed, the AuditLocator table is updated accordingly.
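The existence check and bookkeeping boil down to something like the following sketch (the AuditLocator column names here are assumptions for illustration, not the actual schema from the scripts):

```sql
-- Run against the target instance. @audit_name, @instance, and @audit_path
-- are placeholders the script would supply.
IF NOT EXISTS (SELECT 1 FROM sys.server_audits WHERE name = @audit_name)
BEGIN
    -- Create the audit object from the master definition here, then
    -- register it in the central repository.
    INSERT INTO SQLAudit.dbo.AuditLocator
        (instance_name, audit_name, audit_path, active)
    VALUES (@instance, @audit_name, @audit_path, 1);
END
```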

If a database audit specification was selected, the script connects to the target database and checks for a pre-existing spec. If one is found, the user is asked whether the spec should be overwritten. If yes, the specification is dropped and recreated. (There didn’t seem to be any sense in writing a separate function to update an audit specification.) Otherwise, the new specification is created. The same general logic is used for a server audit specification.
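The drop-and-recreate path for a database audit specification looks roughly like this sketch, using the names from the sample deployment later in the post (note that a specification has to be disabled before it can be dropped):

```sql
USE AdventureWorks2012;  -- the target database

IF EXISTS (SELECT 1 FROM sys.database_audit_specifications
           WHERE name = N'Master_Database_DDLAudit')
BEGIN
    ALTER DATABASE AUDIT SPECIFICATION [Master_Database_DDLAudit]
        WITH (STATE = OFF);
    DROP DATABASE AUDIT SPECIFICATION [Master_Database_DDLAudit];
END

CREATE DATABASE AUDIT SPECIFICATION [Master_Database_DDLAudit]
FOR SERVER AUDIT [Master_DDLAudit]
ADD (SCHEMA_OBJECT_CHANGE_GROUP)
WITH (STATE = ON);
```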

RemoveAudit.ps1

Parameters

Instance Name – name of the instance you want to stop auditing

Database Name – name of the database being audited

Audit Specification Name – name of the audit specification to remove

Target Login – login used to connect to the target instance

Target Password – password for the target login

If run interactively, the user is prompted for the target instance and database, login, and password. The script connects to the target instance and lists all existing server audit specs, along with any database audit specs in the specified database. It then prompts the user for the audit spec to remove. The script verifies that the selected spec name is valid and, if so, deletes that specification. If that was the last audit specification associated with the audit object, the audit object is also dropped and the record in AuditLocator is updated to mark the audit as inactive.
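The cleanup step can be sketched like this for a server audit (the real script also has to account for database audit specifications in other databases, and the AuditLocator column names here are assumed for illustration):

```sql
-- Disable the audit before dropping anything attached to it.
ALTER SERVER AUDIT [Master_DDLAudit] WITH (STATE = OFF);

-- If no server audit specification still references this audit, drop it
-- and mark it inactive in the central repository.
IF NOT EXISTS (SELECT 1
               FROM sys.server_audit_specifications s
               JOIN sys.server_audits a ON s.audit_guid = a.audit_guid
               WHERE a.name = N'Master_DDLAudit')
BEGIN
    DROP SERVER AUDIT [Master_DDLAudit];
    UPDATE SQLAudit.dbo.AuditLocator
    SET active = 0
    WHERE audit_name = N'Master_DDLAudit';
END
```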

(Behind the scenes, the import process uses Chad Miller’s Out-DataTable.ps1, which formats input into a DataTable object that can then be imported into SQL Server using Write-DataTable.)

Sample Deployment

Let’s say we want to audit DDL (a common theme for me). Start by creating a Master_DDLAudit audit object on whatever instance holds the SQLAudit database. Use a file output, specifying a UNC path to our central audit folder. This is where all our audits will write to, so make sure it’s accessible by all your potential target instances. We’re not actually auditing with this master audit, so leave it disabled.
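In T-SQL terms, the master audit object amounts to something like this (the UNC path and file options below are placeholders; adjust them for your environment):

```sql
CREATE SERVER AUDIT [Master_DDLAudit]
TO FILE (
    FILEPATH = N'\\fileserver\SQLAudits\',  -- hypothetical central audit share
    MAXSIZE = 100 MB,
    MAX_ROLLOVER_FILES = 10
)
WITH (QUEUE_DELAY = 1000, ON_FAILURE = CONTINUE);
-- Server audits are created in a disabled state, which is exactly what we want here.
```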

Next create an audit specification, Master_Database_DDLAudit, in the SQLAudit database. Specify the SCHEMA_OBJECT_CHANGE_GROUP action, and assign the spec to the Master_DDLAudit audit. Again, leave the specification disabled.
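The matching specification, created in the SQLAudit database, might look like:

```sql
USE SQLAudit;

CREATE DATABASE AUDIT SPECIFICATION [Master_Database_DDLAudit]
FOR SERVER AUDIT [Master_DDLAudit]
ADD (SCHEMA_OBJECT_CHANGE_GROUP)
WITH (STATE = OFF);  -- this is a template only, so it stays disabled
```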

Suppose we also want to create a security audit to monitor changes to logins and server-level permissions. These are instance-level events, so we need to create a server audit specification, rather than a database audit spec. I don’t want this output going to the same file as my DDL audit, so I’ll create a new audit object, Master_SecurityAudit. Then I’ll create a server audit spec called Master_Server_SecurityAudit.
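The server-level pair could be sketched as follows (the action groups shown are my picks for covering login and permission changes, and the path is a placeholder):

```sql
CREATE SERVER AUDIT [Master_SecurityAudit]
TO FILE (FILEPATH = N'\\fileserver\SQLAudits\Security\');  -- hypothetical path

CREATE SERVER AUDIT SPECIFICATION [Master_Server_SecurityAudit]
FOR SERVER AUDIT [Master_SecurityAudit]
ADD (SERVER_PRINCIPAL_CHANGE_GROUP),
ADD (SERVER_PERMISSION_CHANGE_GROUP)
WITH (STATE = OFF);
```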

Now if I want to deploy the DDL audit to my AdventureWorks2012 database, I simply run the DeployAudit script.
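A typical invocation might look like this (the parameter names here are illustrative; check the script’s param() block for the real ones):

```powershell
.\DeployAudit.ps1 -Instance "SQLPROD01" -Database "AdventureWorks2012" `
    -AuditSpec "Master_Database_DDLAudit" -Login "auditadmin" -Password "********"
```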

If I check my AuditLocator table, I see the audit has been added as an active audit.

To stop auditing DDL on AdventureWorks, I use the RemoveAudit script.
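Again, an illustrative invocation (parameter names assumed):

```powershell
.\RemoveAudit.ps1 -Instance "SQLPROD01" -Database "AdventureWorks2012" `
    -AuditSpec "Master_Database_DDLAudit" -Login "auditadmin" -Password "********"
```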

Since this was the only audit specification attached to this audit, a select on my AuditLocator shows that the audit is now marked as inactive. The audit object has also been removed from the instance.

Conclusion

That concludes this series on SQL Audit. I hope you’ve enjoyed it as much as I have and I really hope you’ve found it helpful for your own auditing needs. Please don’t hesitate to contact me with any questions or problems you run into with the scripts.

I have a question about the $master_admin and $master_admin_password variables. Can they be omitted? I was looking through the Library-Management scripts and see them used in some of the connection strings. It’d be nice to be able to use Windows auth for running the script.

All I have to say is thank you thank you thank you. This is a solution that is worth a lot of money and you are giving it away.

Blessings.

I’ve made a few changes to the tables and the stored procedure. I added an identity column to AuditStage and AuditRecord so I can have an auto-incrementing column, then made that the primary key for the AuditRecord table. I’m thinking about adding some indexes for performance. Is there anything that you recommend? I was considering audit_name, event_time, action_id, server_principal_name, and class_type.
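For the columns mentioned, a starting point might be something along these lines (the index names and exact column split are just a sketch, not a tested design):

```sql
CREATE NONCLUSTERED INDEX IX_AuditRecord_EventTime
    ON dbo.AuditRecord (event_time)
    INCLUDE (audit_name, action_id, server_principal_name, class_type);

CREATE NONCLUSTERED INDEX IX_AuditRecord_AuditName
    ON dbo.AuditRecord (audit_name, event_time);
```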

In the stored procedure I added code to update the event_time in the staging table to local time before it inserts it into the final records table.
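Since sys.fn_get_audit_file returns event_time in UTC, one common way to do that conversion uses the current UTC offset, roughly like this (note this applies today’s offset to historical rows, so events from the other side of a DST change can be off by an hour):

```sql
UPDATE dbo.AuditStage
SET event_time = DATEADD(MINUTE,
                         DATEDIFF(MINUTE, GETUTCDATE(), GETDATE()),
                         event_time);
```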

Hi Michael – Sorry it’s taken so long for me to reply to your comments, but I’m glad you got the Windows Auth working and I’m very glad you’re finding the solution useful. And thanks for the suggested modifications! One of these days I’ll have to go back and make updates/improvements to the original code based on lessons learned and feedback from people like you.