PowerShell

Many of our production databases are restored nightly onto a development server, which is a great way to test your backups and to give developers production data to work with. To do this "Prod on Dev" refresh, though, you have to recreate any database objects you made yesterday because they get stomped by the restore. You could maintain one massive script for all the objects and run it first thing every morning, but if you are creating many objects that script quickly becomes hard to maintain. A semi-automated solution is to put each object in its own file and use SQLCMD to run each file, either manually or from the Windows Task Scheduler. Better, but every time you add an object you have to edit the control script. How about putting each object in a file in the same folder and using PowerShell to get the file names and run each one with SQLCMD? The PowerShell script below does just that. Schedule it from Windows Task Scheduler and you are ready to refresh as needed.

################################################
# Get a list of sql files in a folder and run
# them in alphabetical order using SQLCMD.
################################################

As you can see, the code is quite simple but let’s take a closer look and consider some potential enhancements.

In #Setup we are clearing the screen and setting variables for future use. These values could easily be input parameters or the whole thing could be turned into a function.

In the #Get file names to run section you retrieve a list of files in the source folder and sort them alphabetically. If they need to be run in a certain order (such as tables before procedures) you can add an index to the file name (010_MyTable.sql). If the files are under source control, say TFS, you could use PowerShell to run TF.exe to get the latest version of the files. Running from the command line in PowerShell is very straightforward as you will see in the next section.

Finally #Run SQLCMD for each file is the code that loops through each file and runs it using the SQLCMD utility. You could easily add logging and notifications if you have the need.
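The full script did not survive the move to this page, so here is a rough sketch of the three sections described above. The folder path, instance name, and database name are placeholders you would replace with your own values.

```powershell
#Setup
Clear-Host
$ScriptFolder = "C:\DevRefresh\Scripts"   # placeholder: folder holding one .sql file per object
$Instance     = "MyServer\MyInstance"     # placeholder: target SQL instance
$Database     = "MyDatabase"              # placeholder: target database

#Get file names to run
# Sort alphabetically; prefix file names (010_MyTable.sql) to force an order
$Files = Get-ChildItem -Path $ScriptFolder -Filter *.sql | Sort-Object Name

#Run SQLCMD for each file
foreach ($File in $Files)
{
    sqlcmd -S $Instance -d $Database -E -i $File.FullName
}
```

Turning the three variables at the top into parameters would make this a reusable function.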

As you can see, you can automate tedious tasks quickly and easily using PowerShell and SQLCMD.

We've been hearing for a long time that PowerShell (PS) is the wave of the future for SQL Server administration. Under the covers SSMS itself is powered by PS, so there ought to be benefits, but how do you get started? Everybody has seen how you can right-click a node in the SSMS Object Explorer and jump right into PS.

But then what?

Let’s consider a small administrative task. Fred creates a bunch of jobs without setting the job owner so he owns them all. He decides to seek greener pastures and when his account is dropped the jobs quit working.

I know you all have (or should have) a script for finding and changing the owner of certain SQL Agent jobs on a SQL instance. They all use a cursor (or something like it) to query msdb.dbo.sysjobs and step through the results, executing msdb.dbo.sp_update_job where appropriate. This certainly works, but it's kind of a kludge. Wouldn't it be more intuitive to say something like "Look at all the jobs; where 'Fred' is the owner, change the owner to 'sa'"?

That’s really all you do in PS and you can find examples from Mr. Google. However, PowerShell’s ability to use shortcuts and abbreviations often makes figuring out what’s really happening like making sense of alphabet soup.

Here are two snippets of code. The first lists all the jobs on an instance with their owners and the second finds all the jobs with a certain owner and changes that owner to another. Let’s unpack each and see what’s going on.

dir | FT Name, OwnerLoginName -auto

“dir” is an alias for “Get-ChildItem” (Also abbreviated “GCI”). It functions like the “dir” command at the DOS prompt – it looks at the current location and returns everything “below”. The important difference is that it doesn’t return a string about the objects, it returns the objects themselves. Keep that in mind as we go on.

“|” The pipe character represents the PS pipeline which hands the object(s) to the next command. In our case we are passing all the objects returned by Get-ChildItem to the next command. See this link for more information on the pipeline.

“FT” is short for “Format-Table” and is used to format output. The comma separated arguments “Name” and “OwnerLoginName” tell FT which columns to return and the “-auto” automatically prettifies the output for human consumption.

Follow the screenshots above, paste this line of code into the PowerShell window, and hit "Enter" to see it for yourself.

“dir|” functions as above but remember the actual objects are being passed, not just a property value (name, owner, etc.). In this case we don’t need all the jobs, just the ones Fred owns.

"?" is an alias for the "Where-Object" cmdlet, which filters for the objects you want. The curly brackets ("{}") contain the script block where you put the filter logic. "$_" refers to each object coming through the pipe and "OwnerLoginName" is the property examined as it passes through the filter. "-eq" is the equality operator (=), while "Fred" is the string it is trying to match.

After this, the objects that met the filter criteria are piped to the next command. Again we have an alias, "%". This is short for "ForEach-Object" and lets you iterate over the objects, changing properties or executing methods. In this case we set ("=") the "OwnerLoginName" value to the string "sa". "Alter()" is a method call that commits the change to the server. Finally, "Name" prints out the name of each job changed.
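Putting those pieces together, the second snippet described above would look something like this. This is a sketch reconstructed from the description; "Fred" and "sa" are the example logins, and it assumes you are sitting at the Jobs node of the SQL Server Agent provider.

```powershell
# Find every job Fred owns, reassign it to sa, and print each job changed
dir | ?{ $_.OwnerLoginName -eq "Fred" } | %{
    $_.OwnerLoginName = "sa"   # set the new owner on the job object
    $_.Alter()                 # push the change back to the server
    $_.Name                    # emit the name of each job changed
}
```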

As you can see, PowerShell uses an intuitive and object oriented approach to working with SQL Server objects though sometimes shortcuts and aliases can make it a little hard for beginners to interpret. It’s well worth the time to take a careful look at any scripts you download so you know exactly what’s going on. PowerShell has a very nice help system that allows you to do this research. Look here if you’d like some help on Help.

I am moving a number of SSIS configuration files from one folder structure to another as part of a SQL Server upgrade. Both folder structures have each of the files but they point to different file drop locations, SQL Instances, etc. My task was to compare the configuration settings in each environment and map all the changes from the old environment to the new.

Rather than opening each file and visually checking for changes I went with a PowerShell approach centered on the Compare-Object cmdlet. This cmdlet compares two objects and produces a comparison report. This will work whether the two objects are folder structures, text files, or other objects.

My approach was to start with two folders containing .dtsconfig files. First compare the two sets of files and return a list of files that are in both folders. Then, file pair by file pair, open each file and compare contents. Write any differences to an output file. See the comments below for more details.
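Only a fragment of the original script survives below, but the approach can be sketched like this. The folder paths and output file name are placeholders.

```powershell
# Placeholder paths - substitute your own
$Path1   = "C:\OldEnv\Configs"
$Path2   = "C:\NewEnv\Configs"
$OutFile = "C:\Temp\ConfigDiff.txt"

# Find the .dtsConfig files that exist in both folders
$Files1 = Get-ChildItem -Path $Path1 -Filter *.dtsConfig
$Files2 = Get-ChildItem -Path $Path2 -Filter *.dtsConfig
$Common = Compare-Object $Files1 $Files2 -Property Name -IncludeEqual -ExcludeDifferent

# Compare each file pair and write any differences to the output file
foreach ($File in $Common)
{
    $Diff = Compare-Object (Get-Content (Join-Path $Path1 $File.Name)) `
                           (Get-Content (Join-Path $Path2 $File.Name))
    if ($Diff)
    {
        $File.Name | Out-File $OutFile -Append
        $Diff | Out-File $OutFile -Append
    }
}
```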

# Write the two paths to the outfile so you know what you're looking at
"Differences in files of the same name between the following folders:" | Out-File $OutFile -Append
$Path1 + " AND " + $Path2 | Out-File $OutFile -Append
"--------------------------------------------" | Out-File $OutFile -Append

As you can see, PowerShell provides significant functionality with very little effort. I was particularly impressed with the comparison report. The SideIndicator column (see below) is a simple way to communicate the differences. This is a typical output file (the names were changed to protect the innocent).

While it isn't a fancy GUI it is functional, visual, extensible, and built in. The extensibility is important because you don't get that from a GUI. Suppose I wanted to keep a "Gold Copy" of my config files and run a daily comparison against production to pick up changes? The code above is almost custom-written for that process; all it needs is an email step. Further, you could use a similar process with SMO to generate database schema and do your own comparisons. The possibilities are truly endless.

Our team was tasked with putting scanned expense report receipts (.PDF) in a SQL Server table for review. As the DBA I thought my part was pretty straightforward. I knocked out the table and associated objects, including the stored procedures to service the table, and was enjoying a cup of coffee when I realized I had not tested my insert procedure. Then it dawned on me: I had no way to test it in SSMS. How can you use a text-based tool to get binary files into the database? If you've figured that one out I'd love to see it. If you haven't, then perhaps my PowerShell approach will be of some use to you.

First let’s create some basic SQL objects, a table and a stored procedure:

CREATE TABLE TAPTEST (Col1 VARBINARY(MAX) NULL)
GO

CREATE PROCEDURE usp_TAPTEST @Col1 VARBINARY(MAX) AS
INSERT INTO dbo.TAPTEST (Col1) VALUES (@Col1)
GO

Here is the PowerShell code if you are in a hurry but it’s worth going through line by line as we will do below.

#Get the file
[Byte[]]$file = Get-Content -Encoding Byte c:\Temp\TAPTEST.pdf

#Connect to DB
$DB = "server=MyInstanceName;integrated security=sspi;database=Bob"
$conn = New-Object System.Data.SqlClient.SqlConnection($DB)

#Build the command and parameters
$cmd = New-Object System.Data.SqlClient.SqlCommand("usp_TAPTEST", $conn)
$cmd.CommandType = [System.Data.CommandType]'StoredProcedure'
$cmd.Parameters.Add("@Col1", [System.Data.SqlDbType]'VarBinary')
$cmd.Parameters["@Col1"].Size = -1
$cmd.Parameters["@Col1"].Value = $file

#Execute the command
$conn.Open()
$cmd.ExecuteNonQuery()

Taking a closer look.

#Get the file
[Byte[]]$file = Get-Content -Encoding Byte c:\Temp\TAPTEST.pdf

The square brackets force the data type of the $file variable to a byte array. Additionally, since Get-Content normally reads the file as text, the -Encoding Byte parameter tells it to read the file as binary.

Connecting to the database is just as you would for any other ADO.NET connection; nothing special here.

#Connect to DB
$DB = "server=MyInstanceName;integrated security=sspi;database=Bob"
$conn = New-Object System.Data.SqlClient.SqlConnection($DB)

Building the command and parameter objects was a little more interesting.

#Build the command and parameters
$cmd = New-Object System.Data.SqlClient.SqlCommand("usp_TAPTEST", $conn)

Note the command text above, "usp_TAPTEST", which will be executed at the server. Any valid DML should execute, including "usp_TAPTEST @Col1 = 1", which works great as long as you have a printable data type. If it weren't for reading the file into a Byte array in the first line we'd be stuck again. So how DO you tell it this is a binary object? Read on.

In the first line below we tell PowerShell to treat the CommandText as a stored procedure. Then we set the first parameter's data type to varbinary. Setting the Size property to -1 tells it that the size is MAX. Finally, set the value of the parameter to the file you read in initially.

$cmd.CommandType = [System.Data.CommandType]'StoredProcedure'
$cmd.Parameters.Add("@Col1", [System.Data.SqlDbType]'VarBinary')
$cmd.Parameters["@Col1"].Size = -1
$cmd.Parameters["@Col1"].Value = $file

At this point you are ready to go, just open the connection and execute.

#Execute the command

$conn.Open()

$cmd.ExecuteNonQuery()

And voilà!

Notice the French lingo – this is a high class blog!

As you can see, the code is pretty simple with the only tricky part having to do with fitting the binary nature of the document with the parameter used by the SQL Command object.

Hopefully you will find this useful both as stand-alone code and as another step in the drive for PowerShell mastery.

I've been telling myself to learn PowerShell for quite a while, but I've never been able to get over the learning curve. What I needed was a piece of work bigger than trivial and smaller than huge that would force me into PowerShell. It came along this week in the form of a request to script all the views in a test database before the refresh from production and then "put them back" into the new test database. Doing this from SSMS is pretty easy, but I sure don't want to do it every time we refresh. I needed something repeatable, simple, and parameter driven.

I knew PS would be relatively simple to manage so I looked into using PS with SQL Server Management Objects (SMO). First I searched the Web for something like “PowerShell SMO Script Objects” and found this, which pointed me in the right direction.

The complete script is at the bottom of the article but I’ve broken it up into sections which I’ll cover in detail. My focus is those of us learning PS so if you are a guru please be patient with the rest of us.

Figure 1

All the user-configurable parameters are set in Figure 1 above. In line 13, note that $ObjType will be inserted as a literal in the code, so PowerShell will read MyObject.$ObjType but execute MyObject.views. This allows you to script tables, views, stored procedures, and other database objects by simply changing the parameters. For even more flexibility you could read the parameters in from a text file.

Also note in line 15 that the “@” indicates a list of string objects that will be excluded from the script. You could also add other schemas if you wish.

Finally, the If block (lines 18-21) tests for the output file and deletes it if it exists. Without this check successive runs of the script would append the same scripts to the file.
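Figure 1 was a screenshot, so here is a hypothetical reconstruction of that parameter block. All names and paths are placeholders, and the line numbers cited above refer to the figure, not to this sketch.

```powershell
# Hypothetical reconstruction of the Figure 1 parameter block
$Instance = "MyServer\MyInstance"            # SQL instance to connect to
$DbName   = "MyTestDb"                       # database whose objects we script
$ObjType  = "views"                          # inserted as a literal: tables, views, etc.
$ExcludeSchemas = @("sys", "INFORMATION_SCHEMA")  # "@" marks a list of schemas to skip
$OutFile  = "C:\Temp\ScriptedObjects.sql"

# Delete the output file if it exists so successive runs don't append duplicates
if (Test-Path $OutFile) { Remove-Item $OutFile }
```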

Figure 2

Next, let’s look at the code to create the SMO we’ll use to script our objects. Line 25 tells PowerShell which assembly we want to use so it can create the objects we need. Lines 28 and 29 create the Server and Databases collection objects respectively. Note that the Server object takes the instance name on creation.

Once you have the DB collection available you are almost ready to start scripting. The Script method is available for each object (table, view, etc.) but there are a number of options you can use to control the scripts generated. For example, if you want to test for existence before creating an object you would set the IncludeIfNotExists = 1. To use the option you will create a ScriptingOptions object (line 32) and set various properties as needed. A complete list of options is here.
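Figure 2 was also a screenshot; the SMO setup it describes would look roughly like this. The variable names are placeholders, with $Instance and $DbName standing for the instance and database parameters set in Figure 1.

```powershell
# Load the SMO assembly so PowerShell can create the objects we need
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")

# The Server object takes the instance name on creation;
# the Databases collection hangs off the server
$Server = New-Object Microsoft.SqlServer.Management.Smo.Server($Instance)
$Db     = $Server.Databases[$DbName]

# ScriptingOptions controls the generated scripts,
# e.g. emitting an existence check before each CREATE
$ScriptOpts = New-Object Microsoft.SqlServer.Management.Smo.ScriptingOptions
$ScriptOpts.IncludeIfNotExists = $true
```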

If you want to perform multiple scripting tasks, say Drop and then Create you can perform the scripting multiple times, appending the results as we see in the following two code blocks.

Figure 3

First you set the ScriptingOptions as in Figure 3 for the operation you want to perform (lines 38 & 50). Then you iterate through every object of the specified type in the database using a foreach loop (lines 41 & 53). Each time the foreach starts it filters out the object if the schema is any one of the values passed in from the list discussed above.
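Reconstructed under the same assumptions, Figure 3's two passes (drops first, then creates, both appended to one file) would look something like this. It assumes the $Db, $ObjType, $ExcludeSchemas, $ScriptOpts, and $OutFile values set up in the earlier figures.

```powershell
# Pass 1: script DROP statements for every object of the chosen type
$ScriptOpts.ScriptDrops = $true
foreach ($Obj in $Db.$ObjType)
{
    if ($ExcludeSchemas -notcontains $Obj.Schema)   # skip excluded schemas
    {
        $Obj.Script($ScriptOpts) | Out-File $OutFile -Append
    }
}

# Pass 2: script the CREATE statements, appending to the same file
$ScriptOpts.ScriptDrops = $false
foreach ($Obj in $Db.$ObjType)
{
    if ($ExcludeSchemas -notcontains $Obj.Schema)
    {
        $Obj.Script($ScriptOpts) | Out-File $OutFile -Append
    }
}
```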

Without the whitespace, the whole PowerShell script takes up only about 20 lines. As you can see, PowerShell and SMO together make a very powerful tool to put in your toolbox. I’d be interested to hear about how you modified this code to meet your own needs.