But that doesn’t really help our cause of needing to use dynamic PowerShell…so I’m going to ignore it and use this more “verbose” way instead!

Like we said, easy enough. We are basically:

* changing the JSON into an object,
* selecting and expanding the Database property,
* selecting and expanding the View property, and
* finally selecting and expanding the IsYAVowel property to get its value.

Is Y a vowel? False
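The steps above can be sketched like this — the JSON shape here is an assumption, reconstructed from the property names and the output shown:

```powershell
# Hypothetical JSON matching the property paths used in this post.
$Json = @'
{
    "ServerName": "Server01",
    "Database": {
        "DatabaseName": "DB01",
        "View": { "IsYAVowel": false },
        "Table": { "Fruit": "Apple" }
    }
}
'@

$Json |
    ConvertFrom-Json |
    Select-Object -ExpandProperty Database |
    Select-Object -ExpandProperty View |
    Select-Object -ExpandProperty IsYAVowel
# False
```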

What we cannot do is either of the following:

The bottom result is a great example showing that PowerShell expects a single property name, not a dotted path.
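For illustration, here is my recreation of the two failing attempts (the object here is a stand-in with the same shape) — in both cases PowerShell reads the whole dotted path as one literal property name instead of walking down the chain:

```powershell
$Object = '{"Database":{"View":{"IsYAVowel":false}}}' | ConvertFrom-Json

# Errors: no property is literally named "Database.View.IsYAVowel"
$Object | Select-Object -ExpandProperty 'Database.View.IsYAVowel'

# Returns nothing: the quoted string is treated as one property name
$Object.'Database.View.IsYAVowel'
```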

So what are we supposed to do when we are asked to select the value from a property when we are given the path like these:

“Database.View.IsYAVowel”, or

“Database.Table.Fruit”, or

“Database.DatabaseName”, or even

“ServerName”?

That is where a dynamic approach to PowerShell can come in handy!

Splitting the string…

… seems simple enough.

Richard Siddaway ( blog | twitter ) had a great post recently about “Variable Squeezing” that we are going to use here to split the strings and assign the results to a variable in one go.

Note the backslash ( \ ) before the dot ( . ) because -split treats its pattern as regex, and an unescaped dot matches any character.
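A minimal sketch of the split, using variable squeezing — wrapping the assignment in parentheses both assigns the value and emits it to the pipeline:

```powershell
$PropertyPath = 'Database.View.IsYAVowel'

# The parentheses "squeeze" the assignment: $Properties is set AND output.
# The backslash escapes the dot since -split uses regex.
($Properties = $PropertyPath -split '\.')
# Database
# View
# IsYAVowel
```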

Now that we have an ordered list of properties we can place them into our string.

Formatted for your viewing pleasure!
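One way (an assumed reconstruction, not necessarily the exact code from the post) to stitch the ordered properties back into a statement string — single quotes stop $Object from expanding now, since we want the literal text:

```powershell
$Properties = 'Database.View.IsYAVowel' -split '\.'

# Build the literal statement text; nothing is evaluated yet.
$Stmt = '$Object.' + ($Properties -join '.')
$Stmt
# $Object.Database.View.IsYAVowel
```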

You know it won’t be as simple as that, but we’re going to try calling it directly and see what happens.

$Stmt
& $Stmt

Has that been treated as a single command?

How can we get PowerShell to treat that as a Script Block?

Well it turns out that the answer is pretty damn simple (once you know what to look for).

PowerShell, like other languages, has types, and one of those types is [scriptblock], which has the lovely method called Create(). If we look at the Overload Definition (by running the method without the accompanying brackets), we can see that all it needs is a string, so let’s pass our string script in!
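Putting it all together (the object here is my stand-in from earlier, not the post’s exact data):

```powershell
$Object = '{"Database":{"View":{"IsYAVowel":false}}}' | ConvertFrom-Json
$Stmt   = '$Object.Database.View.IsYAVowel'

# [scriptblock]::Create (no parentheses) shows the overload definition:
#   static scriptblock Create(string script)
# One string is all it wants.
$ScriptBlock = [scriptblock]::Create($Stmt)
& $ScriptBlock
# False
```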

We had a really chaotic environment with self-hosted customers trying to manage a big internal application that they should have hosted with us, but instead they decided to “save money” by skipping the more expensive hosting costs.

Unfortunately, what was communicated over and over again was that self-hosting was discounted because of the additional manpower the customer would be throwing behind the app — which, in 95% of cases, never happened.

Early on, the company had allowed each customer to request customizations not only to the usual suspects like reports, but to core business logic and functionality, sometimes without involving the development team.

There were various levels of quality to these effective forks of the database, and internal teams weren’t the only source: at the client site there would often be a helpful person who slowly became the Accidental DBA. That same person might also be pressed into making helpful changes to queries, indexes, or whatever, often under time pressure, and might forget to work with our team through the excellent change management process we had engineered.

So, when it came time to upgrade their product, we needed a backup of their database to verify that we had even captured the changes, as they had diverged so much. We tested against each customer’s current version of reality (on the order of a few thousand stored procedures) for each upgrade. Woo, enterprise software!

Most of our customers pledged to create a test environment. In some cases they handled it well and there were no issues; in most cases we had significant delays; and in some cases we dealt with a level of resource mismanagement that traipsed into negligence by customer IT teams and database resources.

With the goal in mind of YEARLY upgrades, we had to get creative to be able to simply get the customer’s data to present a faithful change script.

So what did we always pretty much have?
* SA on a SQL box, and an active connection.
* Management sign off that they didn’t care as long as we didn’t break anything.
* We oftentimes didn’t even have PowerShell 3.0 on the client machines, though we did on the SQL Servers.

What did we need?
* All their data and schema, as quickly as possible without direct access to anything else and with the minimum time blocking their work.

We didn’t have the luxury of database backups, and the client machines we were allowed to use were often incredibly locked down, yet the SQL Servers were at the same time amazingly misconfigured and could do all sorts of awful things, oftentimes including … accessing the internet directly.

So it came down to sanctioned data exfiltration, thanks to poorly managed IT teams and CEOs who wanted it done and weren’t so concerned about the how as much as the now.

Connect; dump PowerShell scripts through a pipeline of indeterminate size (hence the chunking and base64ing); rebuild scriptblocks (including a base64-encoded version of Schemazen and bcp.exe, heh); build queries which just return data to bulk copy down (since most of their data was often stored in one giant documents table, which we never needed); use Schemazen to script out their current schema state; and then compress it all and either put it on a file share, post it over a secure channel, or even convert it to binary so it could be inserted and selected out 😐

Really, I could have used xp_cmdshell directly for most of it, but it has string limits, and my code got large once I started including modules (imagine putting all of dbatools in an xp_cmdshell call), so chunking and executing on the target machine had its benefits.
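The chunk-and-rebuild idea can be sketched roughly like this — every name and the chunk size here are assumptions for illustration, not the actual tooling:

```powershell
# Stand-in for the real, much larger payload (modules, Schemazen, etc.)
$BigScript = 'Get-Date'

$Base64 = [System.Convert]::ToBase64String(
    [System.Text.Encoding]::UTF8.GetBytes($BigScript))

# Keep each piece safely under the string limits of the transport.
$ChunkSize = 4000
$Chunks = for ($i = 0; $i -lt $Base64.Length; $i += $ChunkSize) {
    $Base64.Substring($i, [System.Math]::Min($ChunkSize, $Base64.Length - $i))
}

# ...ship the chunks across however you can, then on the target machine
# reassemble, decode, and turn the text back into an executable block:
$Rebuilt = [System.Text.Encoding]::UTF8.GetString(
    [System.Convert]::FromBase64String(-join $Chunks))
$ScriptBlock = [scriptblock]::Create($Rebuilt)
```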

I also oftentimes had terribly slow links to the client machines, hence the local bcp dumping of relevant data.