Scripting DBA Actions

Last Thursday morning at the PASS Summit I participated in a panel discussion on automating DBA tasks. It was a great discussion, and I was humbled by the other panel members. During the discussion I jotted down a few notes that I didn't get to mention during the talk. The talk was recorded and will likely be available on Quest's site in the next few days.

The first distinction to draw is the difference between scripting and automating tasks. Scripting a task simply means there is some programmatic representation of the task you want to perform. The program can be in any language you choose: Python, PowerShell, C#, Perl, T-SQL, SSIS, etc. The script does all of the necessary work. Automating a task simply means the script is automatically initiated by some action. The action could be a SQL Server Agent job, a Windows Scheduler task, an Extended Event (XEvent), a server/database trigger, a Service Broker event, etc. In other words, instead of the script being executed manually in an interactive way, it's executed automatically in an unattended manner.

During the discussion I made the statement that everything should be scripted but only the right things should be automated. This is one of those high-level statements that has a lot of room for interpretation. But that's the point. You shouldn't have to think about whether to script something. If it's a task you do more than once (in a day, week, hour, month, quarter, etc.), you should script it. If it's a task that others also need to perform, you should script it. But you should think about whether it should be automated. Only the right actions should be automated. Remember, you probably aren't going to be there when something goes wrong. Also, if the task requires reasoning that's not easy to capture in a script, you don't want to automate it. In any event, I wanted to capture some best practices for scripting.

Most of the scripts I come across aren't very good. Sure, they perform the necessary task, but that's about it; venture a little off the expected path and everything goes wrong. The script author simply took the T-SQL script and stuck it in a BAT file that calls SQLCMD. Here are some bullet points (in no particular order) that you should consider when creating your scripts.

Header information: create a standard template for the header of your scripts. It should contain some simple information like:

Title

Description (what it does)

Execution Permissions: what permissions the script needs on the targets it runs against. The goal here is to identify the lowest level of permissions needed for the script to run successfully.

Inputs

Outputs

A simple change history: date changed, who changed it, what was changed
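A minimal sketch of such a header template in Python (the script name, permission, and history entries below are invented purely for illustration):

```python
# A reusable header template; every script fills in the same fields.
TEMPLATE = """\
Title:       {title}
Description: {description}
Permissions: {permissions}
Inputs:      {inputs}
Outputs:     {outputs}
History:     {history}
"""

header = TEMPLATE.format(
    title="rebuild_fragmented_indexes",  # hypothetical script name
    description="Rebuild indexes above a fragmentation threshold.",
    permissions="ALTER on the target tables (lowest level that still works)",
    inputs="server, database, fragmentation threshold (percent)",
    outputs="one log line per index rebuilt",
    history="2008-11-20  initial version",
)
print(header)
```

Because every script starts from the same template, a reader (or a search tool) always knows where to look for the description, the required permissions, and the change history.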

Variables: don't hard-code anything in your script. If you're using a constant, define it up front and use the variable in the body of the script. When I'm writing a script I force myself to do this up front rather than trusting myself to come back later. Doing this allows me to easily turn a constant into an input parameter.
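Here's a small sketch of that idea in Python: the constants live at the top, so promoting one to an input parameter is a one-line change (the threshold and database names are assumptions for illustration):

```python
import argparse

# All constants defined once, up front -- no literals buried in the body.
DEFAULT_THRESHOLD = 30                 # fragmentation percent (illustrative)
DEFAULT_DATABASE = "AdventureWorks"    # hypothetical target database

def parse_args(argv=None):
    # Because the constants already exist as named values, turning one
    # into an input parameter is just a matter of wiring it to a flag.
    parser = argparse.ArgumentParser()
    parser.add_argument("--threshold", type=int, default=DEFAULT_THRESHOLD)
    parser.add_argument("--database", default=DEFAULT_DATABASE)
    return parser.parse_args(argv)

args = parse_args([])   # no flags given: the defaults (constants) apply
print(args.threshold, args.database)
```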

Single vs. Multiple Targets: make sure your script can run against multiple targets. For example, you may write a script that runs against a single table. Shouldn't it really be designed to run against multiple tables? The single-table case is then handled automatically.
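A sketch of the pattern: write the work once per target, then drive it from a list, so one table is just a list of one (the table names and the rebuild function are placeholders):

```python
def rebuild_index(table):
    # Placeholder for the real per-table work (hypothetical).
    return f"rebuilt indexes on {table}"

def rebuild_indexes(tables):
    # The script's entry point takes a list of targets;
    # a single table is simply a list with one element.
    return [rebuild_index(t) for t in tables]

print(rebuild_indexes(["dbo.Orders"]))                   # single target
print(rebuild_indexes(["dbo.Orders", "dbo.Customers"]))  # multiple targets
```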

Error handling: what does your script do when it hits an unexpected case? If you have a lot of scripts it’s worth the time to come up with standard error handling routines that you use in all of your scripts.
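One way to sketch a standard error-handling routine shared by all your scripts (the script name and format are assumptions, not a prescribed standard):

```python
import sys

def report_error(script, exc):
    # One routine used by every script: same format, same destination.
    # stderr here; a table or log file would work just as well.
    print(f"[{script}] ERROR {type(exc).__name__}: {exc}", file=sys.stderr)

def run(task, script="demo_script"):
    try:
        return task()
    except Exception as exc:   # unexpected case: report it in the standard
        report_error(script, exc)   # form rather than failing silently
        return None

result = run(lambda: 1 / 0)   # simulated failure
print(result)                 # None -- the error was reported, not swallowed
```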

Auditing: what standard information should the script output? In some situations it may be necessary to keep a log of exactly what the script did. Where should it store this information: a table, a text file, an XML output file?
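A minimal audit-record sketch: a fixed set of fields written in a structured form that is easy to load into a table, text file, or XML later (the action and target names are hypothetical):

```python
import datetime
import json

def audit(action, target, outcome):
    # Standard audit record: same fields every time, so the output can
    # later be loaded into a table, appended to a text file, etc.
    record = {
        "when": datetime.datetime.now().isoformat(timespec="seconds"),
        "action": action,
        "target": target,
        "outcome": outcome,
    }
    print(json.dumps(record))   # one structured line per action
    return record

audit("rebuild_index", "dbo.Orders", "success")   # hypothetical names
```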

Debug Switch: this is related to auditing, but when the script is run in debug mode it outputs a greater amount of detail – detail you need to diagnose problems but don't need to see in normal operation.

Re-entrant: in some situations this is hard to do. The script should always check for pre-conditions (the expected/necessary state of the targets the script runs against). If the pre-conditions aren't met, the script should fail gracefully. What happens if the script is run 5 times in a row? What happens if the script hits an error, the error is corrected, and the script is rerun? Your scripts need to handle these cases; they need to be re-entrant. This may mean performing a little clean-up at the beginning of the script to remove residue left over from a previous run.
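The shape of a re-entrant script can be sketched like this: check pre-conditions, clean up residue from any earlier run, then do the work, so repeated runs behave like a single run (the work file and pre-condition check stand in for real ones):

```python
import os
import tempfile

# Hypothetical residue a previous run might leave behind.
WORK_FILE = os.path.join(tempfile.gettempdir(), "demo_script.out")

def preconditions_met():
    # Stands in for real checks: connectivity, permissions, expected state.
    return True

def run():
    if not preconditions_met():
        return "failed: preconditions not met"   # graceful, no half-done work
    # Clean up anything a previous (possibly failed) run left behind,
    # so running the script 5 times in a row is the same as running it once.
    if os.path.exists(WORK_FILE):
        os.remove(WORK_FILE)
    with open(WORK_FILE, "w") as f:
        f.write("done")
    return "ok"

for _ in range(5):    # re-entrant: repeated runs are safe
    print(run())
```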

Naming conventions: your scripts have a file name, variable names, method names, etc. Establish naming conventions so you can easily find the script you're looking for (I like verb-noun, following PowerShell's Verb-Noun convention) and easily troubleshoot your scripts. Don't use something like x1 as a variable name.

Location of your scripts: think about storing all your scripts in a standard location. I recently blogged about Windows Live Mesh. A folder that you meshify is a good candidate for your scripts. This will allow you to keep the scripts synchronized between several machines, a copy will be kept up on the web so you can get to them from just about anywhere, and you can share your scripts with other people – as long as they have a Windows Live Id. I'd also save a copy on a USB drive so I always have them on hand. Sometimes I need to do something on a server that can't access the internet. As long as the USB ports haven't been disabled, I'll be able to get to my scripts.

It's very easy to ignore these items and end up with a bunch of .bat or .ps1 files that you have no idea what they do. If you spend a little time up front, it'll pay dividends. Your co-workers will come to you to borrow your scripts or for help on their own scripts. You'll be known as the Script Guru! And many great things will come your way.

Okay, maybe those last two points are a bit over the top. In all seriousness though, we're professionals, and we should give our scripts a professional touch.