Migrating SSIS Packages to SSIS Azure part Two – Automating the Deployment

Hello!

If you’ve read and followed through my previous post, you will have the World Wide Importers Integration Services project running in SSIS Azure. It’s all very interesting; go and have a read.

One thing that is missing from that guide, the documentation, and SSIS in general, is how to automate SSIS deployments. In the WWI SSIS project, there are connection managers whose values we had to update manually to get the project working post-deploy. This is exactly the opposite of what we want to be doing.

Back when SQL Server 2012 was known as Denali, one of the new and exciting features announced was that all properties of a connection manager in an SSIS project would be parameterised, and this even happened automatically when the project was deployed to an SSIS server. Great! The feature was written about some time before 2012 was released.

However, I’ve got an issue with this. What you get is not project parameters filling in the values at deploy time, but rather the values hard-coded in. So it’s useful that they’ve broken the connection string down into its constituent parts, but it’s still not really parameterised.

Configure The Project

So how do we properly parameterise connection managers? Using WWI SSIS as an example, let’s parameterise part of the connection strings stored in that project.

Make sure you have Visual Studio 2017 installed with SSDT 2017 Integration Services support, so that you can import the project from an ispac that you deployed to the server. Import the ispac by going to New > Project and selecting the Integration Services Import Project Wizard:

Browse to the ispac and import.

Now you’ll have the project in Visual Studio.

Open up the dtsx package that uses the connection managers:

Right-click on the connection manager and you’ll see an option called “Parameterize”; if you’ve ever parameterised something in SSIS then this will be familiar to you.

From the drop-down you can see that you can either parameterise the entire connection string or just part of it. I’m going to parameterise ServerName, and then repeat the process for the other connection manager.

And so now I’ve got two project parameters in the Project.params file:

We also need to change the connection managers to SQL Authentication. Enter any credentials; it doesn’t matter, as we will be overwriting these when we deploy.

Now deploy to Azure -

Once this is done, I have project parameters that fill in the value of the server name. If I create an environment and fill in the values there, then I can use environment variables to change the values depending on which environment I am using. I also need to create a reference between the environment and the project:

And I’ve created an environment from which the parameters get the value of ServerName.
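Incidentally, everything the SSMS GUI does here can also be scripted against the SSISDB catalog, which is handy once we start automating. A minimal sketch using the documented catalog stored procedures; the folder, project, environment, and parameter names are from my setup, so substitute your own:

```powershell
# Requires the SqlServer PowerShell module for Invoke-Sqlcmd.
# All object names below are assumptions from my setup; substitute your own.
Import-Module SqlServer

Invoke-Sqlcmd -ServerInstance "localhost" -Database "SSISDB" -Query @"
-- Create the environment and a ServerName variable in it
EXEC catalog.create_environment @folder_name = N'WWI', @environment_name = N'Test';

EXEC catalog.create_environment_variable
    @folder_name = N'WWI', @environment_name = N'Test',
    @variable_name = N'ServerName', @data_type = N'String', @sensitive = 0,
    @value = N'myazureserver.database.windows.net',
    @description = N'Server for the WWI connection managers';

-- Reference the environment from the project ('R' = relative reference)
DECLARE @ref_id BIGINT;
EXEC catalog.create_environment_reference
    @folder_name = N'WWI', @project_name = N'WideWorldImporters',
    @environment_name = N'Test', @reference_type = 'R',
    @reference_id = @ref_id OUTPUT;

-- Point the project parameter at the environment variable ('R' = referenced value)
EXEC catalog.set_object_parameter_value
    @object_type = 20, @folder_name = N'WWI', @project_name = N'WideWorldImporters',
    @parameter_name = N'ServerName', @parameter_value = N'ServerName', @value_type = 'R';
"@
```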

Note that the value of each environment variable is now the Azure instance, not the default instance stored in the Project.params file (which is what would be deployed and used by default, had I not set up an environment).

I need to repeat this for UserName and Password.

Run this using the environment….

And we have a successful run.

Automate With AssistDeploy

I’ve written about AssistDeploy at great length elsewhere; to sum up, the configuration of the environment and how it maps to parameters is stored in a json file. Now that we’ve got the SSIS project working, we can extract that information into the json file that AssistDeploy uses to automate the deployment process.

Confused? OK, let’s clarify with a demo. AssistDeploy is available on the PowerShell Gallery, so you can run Install-Module to download it. You can also use NuGet, or download it from GitHub. So download it whichever way you want onto the machine you were using in part one.
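The Gallery route is the quickest; from a PowerShell session (using CurrentUser scope to avoid needing admin rights):

```powershell
# Install AssistDeploy from the PowerShell Gallery and load it into the session.
Install-Module -Name AssistDeploy -Scope CurrentUser
Import-Module -Name AssistDeploy
```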

Now that we have the module, we can use the “Get-SsisAsJson” function to connect to the instance we deployed to, extract the parameter and environment information, and store it as json.
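Something like the following; the parameter names here are assumptions from memory, so check Get-Help Get-SsisAsJson -Full against your installed version:

```powershell
# Sketch only: connect to the SSISDB we deployed to and dump the
# parameter/environment mapping out as json.
# Parameter names are assumptions; verify with Get-Help Get-SsisAsJson -Full.
$sqlConnection = New-Object System.Data.SqlClient.SqlConnection(
    "Data Source=localhost;Initial Catalog=SSISDB;Integrated Security=SSPI")

Get-SsisAsJson -ssisServer $sqlConnection `
               -ssisFolderName "WWI" `
               -ssisEnvironmentName "Test" `
               -ssisProjectName "WideWorldImporters" |
    Out-File -FilePath ".\wwi_ssis.json"
```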

This is the output….

…and this is the json. Viewing the json file shows the mapping of the project parameters to the environment variables we created:

You’ll notice that the passwords are null. This is definitely a good thing. If you are following along at home, fill this in with the value of your password. Don’t worry! I’ll show you in a bit how to avoid hard-coding passwords into the json.
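To give a feel for the shape, a variable entry looks something like this. This is illustrative only, not AssistDeploy’s exact schema; check the file you extracted for the real property names:

```json
{
  "ssisEnvironmentVariables": [
    {
      "variableName": "ServerName",
      "dataType": "String",
      "isSensitive": false,
      "value": "myazureserver.database.windows.net"
    },
    {
      "variableName": "Password",
      "dataType": "String",
      "isSensitive": true,
      "value": null
    }
  ]
}
```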

I’m going to delete everything off my local server and re-deploy, just to verify that all the mappings are correct. It’s worth noting that AssistDeploy functions are idempotent; that is, they only change what needs changing.

Before:

And here is the logging.

Of course, the problem here is that we are deploying the parameters with exactly the values that are in the json file. So we’re still no closer to parameterising them through an automated deployment. Except that we are: lines 7 and 12 in the gist above include a switch called LocalVariables. If we took that out and re-ran the script, it would fail.

Those of you watching closely will notice that these are the names of the environment variables, with the values we want them to be. In an automated deployment process this is simple stuff, and it means that values are stored only in memory, so sensitive things like passwords are not an issue.

I'll create some variables at this point with the values, remove the LocalVariables switch, and re-run.
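A sketch of what that looks like; the session variable names must match the environment variable names in the json, and the Publish-SsisVariables parameter names here are assumptions from memory, so verify with Get-Help:

```powershell
# Create session variables named after the environment variables in the json.
# Without the -localVariables switch, AssistDeploy resolves values from these
# instead of the (possibly null) values stored in the file.
$ServerName = "myazureserver.database.windows.net"
$UserName   = "ssisLogin"
$Password   = $env:WWI_SSIS_PASSWORD   # e.g. injected as a pipeline secret, never hard-coded

$sqlConnection = New-Object System.Data.SqlClient.SqlConnection(
    "Data Source=localhost;Initial Catalog=SSISDB;Integrated Security=SSPI")

# Parameter names below are assumptions; check Get-Help Publish-SsisVariables -Full.
$json = Get-Content -Raw ".\wwi_ssis.json" | ConvertFrom-Json
Publish-SsisVariables -jsonPsCustomObject $json -sqlConnection $sqlConnection
```

In a CI/CD pipeline, $env:WWI_SSIS_PASSWORD would come from the pipeline’s secret store, so the password never appears in source control.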

In the logs above, it states that it has found matching environment variables with different values and will update these. It would have been easier to drop and recreate all the variables, but it seemed a bit pointless to make changes when they weren't required.