Trying to prove that Skynet should be running on PowerShell!


Some background…
We’ve been buying our groceries online for a few years now. We find it super convenient and it saves us a lot of time. I even created a PowerShell module for it some time ago!

There is one (very minor) annoyance with it though: remembering to use the discount coupons you get after you’ve bought groceries for a certain amount. These coupons or codes get sent out before your current order has been delivered, which means that you can’t add them to your next order (you can’t reach checkout while you have an active order waiting for delivery).

This means I have to wait for my order to be delivered and then add the coupon at the checkout step for my next order, at which point I’ve forgotten all about it and maybe even deleted/archived the e-mail containing the pdf-file with the coupon.

I thought of this as the perfect scenario to check out a (relatively) new service from Microsoft called Flow. The idea behind Flow is to make it simple to automate things without the need to write any code, but that doesn’t mean you can’t do that as well 🙂

How to achieve this?
When building automation I usually try to write down the steps needed to achieve the “end-to-end automation”. In this case that would be:

Make sure the e-mails containing the coupons can be found automatically

Get the coupon from the e-mail moved somewhere where it can be accessed by a PowerShell runbook in Azure Automation

Create a PowerShell function that can parse pdf-files so the code inside can be retrieved

Create another PowerShell function that can post the code to the online grocery store

Profit! 🙂

These steps have now been achieved, and here’s how I did it:

Fetching the E-mail and the attachments (Step 1 and 2)
This is amazingly simple using Microsoft Flow. After you’ve signed up and logged in, just go to “My Flows” and click “Create from template”. There are quite a few to pick from, so the easiest way is to use the search function at the top of the page. Since I’m using Outlook.com as my personal e-mail provider, and thought the simplest way to store the attachments was using blob storage, I simply searched for “outlook blob” and found these templates I could use:

In my case, the first one fits perfectly so that’s the one I chose as a starting point. Click on it, pick “choose this template” and first connect your Azure storage account (needs to be created in advance):

Then connect your e-mail account by logging in:

If everything worked, you can go on and press “Continue”

You’ll then arrive at the page where you can configure the different steps in your flow, and if you want to, add some conditions. After you’ve clicked “edit” on both steps and updated them they should look something like this:

As you can see, I changed the folder this flow should look in from “Inbox” to “Flow” to prevent it from harvesting all the attachments I receive. I can then simply add a mail rule to put the e-mails I want in that folder.

Same thing for the “Create file”-step, “mailattachments” should correspond to a container on your storage account.

That’s it for parsing the e-mails. If you would like to, you could also add an HTTP request after these steps to trigger the runbook automatically (webhook) as soon as a new attachment has been saved to the blob storage, but in this case, I’ll just schedule that to run at a regular interval.

Parsing the pdf-file and posting the discount code (step 3, 4 and 5!)
To be able to get text out of the pdf-file I used the iTextSharp library, then wrapped that up in a PowerShell function, which in its simplest form might look something like this:
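The original function isn’t reproduced here, but a minimal sketch built on iTextSharp could look like this (the DLL path and the function name are my own assumptions, not from the original post):

```powershell
function Get-PdfText {
    param (
        [Parameter(Mandatory=$true)]
        [string] $Path
    )

    # Load the iTextSharp library (adjust the path to where you placed the DLL)
    Add-Type -Path 'C:\Libs\itextsharp.dll'

    $Reader = New-Object iTextSharp.text.pdf.PdfReader -ArgumentList $Path
    $Text = New-Object System.Text.StringBuilder

    try {
        # Extract the text from every page and append it to the string builder
        for ($Page = 1; $Page -le $Reader.NumberOfPages; $Page++) {
            $null = $Text.AppendLine(
                [iTextSharp.text.pdf.parser.PdfTextExtractor]::GetTextFromPage($Reader, $Page)
            )
        }
    }
    finally {
        $Reader.Close()
    }

    $Text.ToString()
}
```

The coupon code itself can then be picked out of the returned text, for example with a regex or the split-operator.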

Time to schedule that in Azure Automation, and make sure all the modules needed are available for it when it runs! (I run this on a hybrid worker)

Conclusion
While I have had a few issues with Microsoft Flow along the way (it is still in preview after all), it seems like a really cool service. And since you can make an HTTP request to a webhook in Azure Automation, and/or just integrate them through some other service like the blob storage example in this post, the possibilities are pretty much endless.

I’ve always enjoyed using PowerShell for things outside of the usual “automating sysadmin tasks” scope, and some time ago I thought it would be cool to use it to create some “branded” wallpapers for my desktop.

Maybe you have a huge number of wallpapers that you have collected over the years, but you’d like to place the PowerShell logo on them to make them sooo much cooler?

Or maybe you just have a bunch of pictures you’d like to watermark to show their origin?

Or maybe you just like to know a few things that you can do with the System.Drawing-namespace and PowerShell.

If any of those are true for you, you might want to try out a function I’ve created called Add-Logo! 🙂

I mostly created it because it was fun to see if it was possible, and it fulfilled its purpose a long time ago so I never got to iron out all the bugs (which probably are there), but I thought it could be fun to share it anyway.

(I should probably add that there are some obvious copyright-related things you need to consider before doing this, depending on what images you pick and how you want to use the results.)

What does it do then?
Well, at first it just placed one picture on another picture. I thought this was pretty neat since I could use Get-ChildItem to pipe picture files to it and place the logo on all of them pretty fast, but there was a problem…

I soon realized that every background picture was so different that the logo didn’t look good at the same place on all of them. I started out placing it in the bottom right corner, but some pictures had “too much going on” so the logo was hard to notice, or the contrast was too low (especially with text-based logos). To solve this I added a parameter for “logo placement”, so it was possible to pick any of the four corners on the background for the logo. I piped the picture files to the function four times (one for each corner) and got a huge collection of “branded wallpapers” where usually at least one out of four looked OK. On top of this, there was an issue with different picture resolutions: on some pictures the logo was huge, and on some it was really tiny.

This was obviously not going to be good enough, so I added some code that would do the following steps:

Sample the pixels of the logo to check what color span it has, the size etc.

Sample the pixels of the background where the logo could be placed

Do some “analysis” of this and place the logo as well as possible

Add parameters for controlling the position of the logo, minimum contrast, relative logo size etc. to the function to make it more dynamic
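The overlay step at the core of all this can be sketched with the System.Drawing namespace like so (paths, offsets and the scale factor are placeholder assumptions, and the pixel sampling/analysis from the steps above is left out):

```powershell
Add-Type -AssemblyName System.Drawing

# Load the background and the logo (placeholder paths)
$Background = [System.Drawing.Image]::FromFile('C:\Pics\Background.jpg')
$Logo       = [System.Drawing.Image]::FromFile('C:\Pics\Logo.png')

# Scale the logo relative to the background width so it isn't huge or tiny
$Scale  = $Background.Width / ($Logo.Width * 5)
$Width  = [int]($Logo.Width * $Scale)
$Height = [int]($Logo.Height * $Scale)

# Draw the logo onto the background, 20 px from the bottom right corner
$Graphics = [System.Drawing.Graphics]::FromImage($Background)
$Graphics.DrawImage($Logo, $Background.Width - $Width - 20,
                           $Background.Height - $Height - 20, $Width, $Height)
$Graphics.Dispose()

# Save the result as a new file
$Background.Save('C:\Pics\Branded.jpg')
```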

I’ve had close to zero experience with image processing so this is obviously not anywhere near a professional grade software for doing this kind of thing, but being able to use PowerShell for working with pictures is still pretty cool IMHO 🙂

Some usage examples follow:

First begin with a picture that can act as a logo, I took the liberty of using the PowerShell logo (registered trademark of Microsoft):

and this picture of a few clouds (should go hand in hand 🙂 ):

I personally think that logotypes usually look better where the background is “solid”, more so than where the contrast is the greatest, so the parameters’ default settings have a preference towards that (as long as the minimum allowed contrast level is achieved).

Since the contrast value is met (controlled by the “MinimumContrast” parameter) and the color is pretty solid it ended up in the top left corner.

Let’s do another one, using the same PowerShell logo but on this picture:

The command is now run with Verbose output which will show you a bit more information about why the function picked a certain corner:

Results:

This might seem a bit odd at first. The top left corner seems to be a better placement for the logo, and its contrast is certainly greater as you can see from the verbose output, but the problem here is that the logo doesn’t really fit in the black area, and therefore the function sampled some of the “blue/earth pixels” as well since that’s where parts of the logo would have ended up.

Let’s try running the same command again, but this time with the “ProportionFactor” parameter, which enables scaling of the logotype (higher value makes the logo smaller):
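An invocation might look something like this (the Path/LogoPath parameter names are my guesses for illustration; ProportionFactor and Verbose are the ones discussed in the post):

```powershell
# Hypothetical call: scale the logo down with ProportionFactor and show
# the placement reasoning with -Verbose
Add-Logo -Path 'C:\Pics\Earth.jpg' -LogoPath 'C:\Pics\PowerShellLogo.png' -ProportionFactor 6 -Verbose
```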

Results:

Now the top left corner had both the greatest contrast and the most solid color since the logo actually fits in the black area!

As you can see, the function usually needs some tuning from the standard values depending on the logo you are using (play around with MinimumContrast and MaxColorSpan!), but once you’re happy with a few test pictures, you can usually just pipe all the images you want to it and you’ll have a bunch of “branded” desktop wallpapers to use!

The function also includes other parameters, for example how close to the edge you want to place the logo or if you just want to place it in the same corner on all of the pictures.

Summary
While the task of manipulating images can be solved in a lot of other (better/more efficient) ways, I think it’s still pretty cool that you can use PowerShell to achieve something like this. So while the code itself in this function isn’t really something to write home about, the fact that PowerShell can be used for such a variety of tasks kind of is!

So, there are a lot of PowerShell “scripting games” around recently, which is great. But are they really games? 😉

Let’s do something a bit different, let’s measure how fast we can type out cmdlet names!

Tab completion cannot be used, and the input will be compared in a case-sensitive way. (Yes, I know, this is by no means a measure of how good your PowerShell skills are, like, at all. But hey, it’s just a game 🙂) The code for the function that can measure this follows (download link):
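If the download link no longer works, a rough sketch of such a measurement could look like this (the function name and details are my own, not the original code):

```powershell
function Measure-CmdletTypingSpeed {
    param (
        [int] $Count = 5
    )

    # Pick a few random cmdlet names for the player to type
    $Cmdlets = Get-Command -CommandType Cmdlet | Get-Random -Count $Count

    foreach ($Cmdlet in $Cmdlets) {
        Write-Host "Type: $($Cmdlet.Name)"

        $Stopwatch = [System.Diagnostics.Stopwatch]::StartNew()
        $Typed = Read-Host
        $Stopwatch.Stop()

        [PSCustomObject]@{
            Cmdlet  = $Cmdlet.Name
            Seconds = [Math]::Round($Stopwatch.Elapsed.TotalSeconds, 2)
            # -ceq makes the comparison case sensitive, as described above
            Correct = $Typed -ceq $Cmdlet.Name
        }
    }
}
```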

But very recently a public preview of a new version of the module was released where the authentication part has been changed to use ADAL instead, which seems to fix this issue!

This is how you can test it yourself:

First, you need to uninstall any previous version of the module you might have. If you can, go ahead and uninstall the Microsoft Online Services Sign-In Assistant as well to make sure the new module works as expected (the dependency on this service is now removed).

Then go to the download page for the preview version of the module, download it and install it. The installation procedure is very simple:

Click next…

Read the license terms, and check the box if you agree. Click next again…

Choose an installation path (this is actually not where the module currently ends up though, just the EULA-file…).

Click Install to begin the installation, and confirm the UAC-prompt if you get one.

The installation runs…

And finally, just click Finish and the module is installed.

Now open a PowerShell prompt, and run the following commands:
Import-Module MSOnline
Get-Module MSOnline | Format-List

In the property “Path”, you’ll see where the module was installed, in my case it was “C:\WINDOWS\system32\WindowsPowerShell\v1.0\Modules\”:

Go to that folder and zip the folders MSOnline and MSOnlineExtended; the easiest way is probably to right-click on the folder, choose “Send to”, and then “Compressed (zipped) folder”. Do this for both (one at a time). It should look something like this:

It will prompt you to place the zipped files on your desktop instead of the current folder, which is a good idea, so click “Yes” 🙂

You can now import those zip-files into Azure Automation. I recommend that you do this in the classical portal (I’ve had some issues when importing modules in the preview portal).

First, go to your automation account, then go to assets, and then click “Import Module” at the bottom:

Browse to your zipped module and click open:

Click “Complete” in the lower right corner:

Repeat for both modules. You can follow the progress at the bottom of the page:

When everything is done, you should be able to use the module in Azure Automation. A simple native PowerShell script runbook that just lists some users would look like this:
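A minimal sketch of such a runbook could look like this (the credential asset name ‘Office365Admin’ is an assumption; create one that matches your setup):

```powershell
# Fetch the stored credential asset from the Automation account
$Credential = Get-AutomationPSCredential -Name 'Office365Admin'

# Authenticate against Azure AD with the MSOnline module
Connect-MsolService -Credential $Credential

# List the first ten users in the tenant
Get-MsolUser -MaxResults 10 | Select-Object DisplayName, UserPrincipalName
```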

Can you automate anything with Azure Automation?
While there are some limitations on what it can and cannot do, I thought I could have a bit of fun using some of the fairly new features in Azure Automation to show that even though the main purpose of this service is to automate management tasks in the cloud (and in your local datacenter using hybrid workers), since it’s built on PowerShell, there really isn’t that much you cannot do.

I’ve been trying to come up with a scenario that is a bit of fun, and at the same time shows how you can use features like hybrid workers and webhooks to overcome almost any obstacles you have when automating something that spans over different services and locations.

This is what I came up with:

Let’s say I’m on my way home from work, it’s autumn and the rain is pouring down. I feel tired, cold, and just want to come home and grab a nice cup of coffee. To detect if something is running out of resources (in this case me), and to trigger something that can fix it (in this case, coffee) is a pretty common scenario in IT.

And to simulate that a process like this might span over multiple services where some are in the cloud (in this case iCloud, twitter and a weather service), while others are in your local datacenter (or in this case the coffee brewer in my kitchen), we are going to run parts of the code in Azure Automation and parts of it on a hybrid worker, everything orchestrated with Azure Automation.

The steps involved, at a high level, will be:

Fetch the location of my phone through iCloud

Use that information to fetch the weather data at that location, and check if it’s going to rain

If rain was detected, start with sending out a tweet asking if I would like some coffee, if the reply is positive, brew some coffee.

If no reply to the tweet is detected, it will send out an e-mail with a link to a page where the coffee brewer can be started with a button (that calls a runbook through a webhook)

So let’s get started!

So, where am I? And how’s the weather?
I don’t work at the same place everyday, so I don’t want to hard code the location where the weather is checked. Wouldn’t it be great if I could just fetch this information dynamically somehow? Well, with Azure Automation and some PowerShell, I can.

Since I carry around a smartphone with a GPS all day long, I thought that would be a good source for location details, and since you can fetch your location information through iCloud when you have an iPhone, this was the method I chose to do it.

Disclaimer: This code is for educational purposes ONLY, I do not take any responsibility if you use this outside of the ToS for the different services utilized here.

So I started by creating a PowerShell function that could fetch my phone’s location through iCloud; if you want to take a look at it, it’s available here.

We then need to fetch some weather data at that location. Luckily, I’ve already built a function like that before; blog post available here.

So, I have the tools to fetch my current location and the weather at that location. But how do we use this in Azure Automation?

Importing custom modules in Azure Automation
This is actually really simple! You can import almost any PowerShell module into Azure Automation, as long as you zip it up in a folder with the same name as your module file. So I took my two functions above and put them into a WebUtilities.psm1-file. I then put that file into a WebUtilities-folder, and finally zipped it all up as “WebUtilities.zip”. If you want to learn more about how to create integration modules for Azure Automation, including creating an optional file containing information about an Azure Automation connection variable, more information is available here.
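The folder-and-zip dance can of course be scripted as well; a small sketch (Compress-Archive requires PowerShell 5.0 or later, on older versions you can zip via Explorer instead):

```powershell
# Create the folder named after the module file and copy the psm1 into it
New-Item -ItemType Directory -Path .\WebUtilities -Force | Out-Null
Copy-Item -Path .\WebUtilities.psm1 -Destination .\WebUtilities

# Zip the folder so it is ready to import into Azure Automation
Compress-Archive -Path .\WebUtilities -DestinationPath .\WebUtilities.zip
```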

We then need to import this into Azure Automation. The screenshots that follow are from the “classic portal”, but you can do this in the preview portal as well:

First find the automation account you want to use, go to assets, and then click “Import Module” at the bottom:

Browse to your zip-file, click Open to select it, and press “Complete” in the lower right corner:

Azure Automation will then begin to import the module and extract the activities it contains; you can follow the progress at the bottom of the page:

These functions are now available in our PowerShell Workflows and PowerShell runbooks. Neat huh?

(The custom modules you import will not, at the time I’m writing this, be pushed to your hybrid runbook workers automatically. The Azure Automation team is working on that though, so it will happen eventually. In the meantime, you need to do this yourself.)

Writing the code…
It is now time to use the functions and actually write the code needed to tie everything together. There are many cool new features regarding Azure Automation, but one of my favorites is the PowerShell ISE add-on the Azure Automation team is working on. If you work with Azure Automation, I can’t recommend checking out its GitHub repository enough, and ever since I did a build straight from the source it has been working pretty well considering it’s still a very early release.

This is how my setup looks (ISESteroids, another great product, is also used here):

In addition to enabling you to use all of the features of the PowerShell ISE (and ISESteroids if you use that), this add-on enables you to, for example, fetch your runbooks straight from Azure, upload changes, run the code locally with emulated activities, test the code in Azure, and manage your assets so they are available when you test the code locally.

The productivity boost you get from this in comparison to the text authoring and testing experience in the portal, at least in my experience, is huge. So go ahead and try it out!

So, back to the code itself. As stated above, the steps involved here will be:

Fetch the location of my phone through iCloud

Use that information to fetch the weather data at that location, and check if it’s going to rain

If rain was detected, start with sending out a tweet asking if I would like some coffee, if the reply is positive, brew some coffee.

If no reply to the tweet is detected, it will send out an e-mail with a link to a page where the coffee brewer can be started with a button (that calls a runbook through a webhook)

And since the PowerShell community is so awesome, this is a pretty common scenario as well: you build a few functions of your own, and you find some from others. Just zip them up and import them in the same way as the functions above. To use the MyTwitter module, you also need to add API keys; just follow Adam’s instructions and you’ll be fine!

If you have read some posts on this blog before, you probably know that I enjoy creating home automation scripts quite a lot, and I’ve named this little project Jarvis after the famous AI. The ‘JarvisGroup’ specified above (in the Start-AzureAutomationRunbook cmdlet) is the hybrid worker group that runs some of these scripts. If you want to learn more about hybrid runbook workers and how to deploy them, check out this link.

Currently, you can’t use webhooks to trigger runbooks on a hybrid worker. As a workaround, I have another runbook that uses the Start-AzureAutomationRunbook cmdlet to trigger the runbook on the hybrid worker instead; the code for that looks like this:
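A sketch of such a wrapper runbook (the account, runbook and hybrid worker group names below are placeholders matching the examples in this post):

```powershell
workflow Start-CoffeeOnHybridWorker
{
    # Trigger the target runbook on the hybrid worker group instead of in Azure
    Start-AzureAutomationRunbook -AutomationAccountName 'MyAutomationAccount' `
                                 -Name 'Start-CoffeeBrewer' `
                                 -RunOn 'JarvisGroup'
}
```

The webhook then points at this wrapper, which in turn starts the real runbook on the hybrid worker via the -RunOn parameter.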

The module containing the Connect-TelldusLive and Set-TDDevice cmdlets is installed on the target hybrid worker since that’s where it will execute (and as stated above, the module won’t be pushed out to hybrid workers automatically from Azure Automation even if you have imported them there, but that will be fixed in the future).

So, we’re all set now…

But, does it all work?
Well, you’d obviously have to come by for coffee some time to see this for yourself, but yes, it actually does! 🙂

The code for that form with the token masked (be aware that posting a form like this on a public website without authentication is a MAJOR security risk depending on the runbook type, it’s only for demo purposes in this case):

Summary
I hope this post has helped you see how flexible Azure Automation actually is. PowerShell is truly versatile and a great “glue language” to tie different services together. Even though using Azure to turn on a coffee brewer might be a bit overkill, if it’s possible to integrate a weather service, an iPhone, e-mail, twitter and a coffee brewer using it, it can probably manage your IT environment as well, don’t you think? 🙂

Only administrators can connect through PowerShell remoting (WinRM) with the default configuration, and if you are running a version older than Windows 8/Server 2012 you won’t have the “Remote Management Users” local group to add non-admins to if you want to give them access.

You can configure the access list of the endpoint(s) using “Set-PSSessionConfiguration -Name Microsoft.PowerShell -ShowSecurityDescriptorUI”, but it only runs locally, and if you don’t want to build the SDDLs yourself, the only alternative is the UI enabled by the switch in that example.

So I put together a function to enable you to simply pass an account (user or group) by name, and if you want to run it remotely, a computer name. It is really simple to use. It looks like this in action:
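The function itself isn’t reproduced here, but a condensed sketch of the core idea could look like this (the function name is my own, and the real function also supports running against remote computers and has proper error handling):

```powershell
function Add-PSRemotingAccess {
    param (
        [Parameter(Mandatory=$true)]
        [string] $AccountName
    )

    # Read the current security descriptor of the default endpoint
    $SDDL = (Get-PSSessionConfiguration -Name Microsoft.PowerShell).SecurityDescriptorSddl
    $SecurityDescriptor = New-Object System.Security.AccessControl.CommonSecurityDescriptor($false, $false, $SDDL)

    # Resolve the account name to a SID and add an "allow full control" ACE
    # (0x10000000 = GENERIC_ALL)
    $SID = (New-Object System.Security.Principal.NTAccount($AccountName)).Translate([System.Security.Principal.SecurityIdentifier])
    $SecurityDescriptor.DiscretionaryAcl.AddAccess('Allow', $SID, 268435456, 'None', 'None')

    # Write the updated SDDL back (this restarts the WinRM service)
    Set-PSSessionConfiguration -Name Microsoft.PowerShell `
                               -SecurityDescriptorSddl $SecurityDescriptor.GetSddlForm('All') -Force
}
```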

What I want to address with this post is the process of obtaining the public key and thumbprint of the certificate used for encryption. A lot of examples I’ve seen follow the basic concept of retrieving the certificate from the local server where the mof will be deployed, but that requires firewall openings to all servers and credentials to them. I think this might be a better/simpler alternative, at least in some cases.

So I’ve written a function (link at the bottom of this post) that gets the information needed straight from a Microsoft Certificate Authority (aka Active Directory Certificate Services) instead of all the different servers, which I think simplifies the process a bit.

I’ve also added some other properties to the returned objects to make it possible to use this advanced function for monitoring expiring certificates.

I’ll give you some examples on how to use this function below!

I’d also like to point out that I found a lot of parts of this code on the internet; I’ve just added a few extra things to it and wrapped it in an advanced function. I’m not sure who the original author of this code is though, so if anyone knows, please add a comment below so I can give credit where credit is due! Thanks to whoever you are! 🙂

So, the process itself is pretty straightforward: specify your CA instance and what certificates you are interested in, and the function will return them for you. You could for example do this:
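For example, something along these lines (the function and parameter names here are illustrative guesses, adjust to the actual function from the download link):

```powershell
# Fetch issued certificates from the CA and keep the DSC node certificates
Get-CertificateFromCA -CAComputerName 'CA01.contoso.com' -CAName 'Contoso-Issuing-CA' |
    Where-Object { $_.CommonName -like 'DSCNode*' } |
    Select-Object CommonName, Thumbprint, NotAfter
```

The NotAfter property also makes it easy to filter out certificates that are about to expire, for the monitoring scenario mentioned above.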

When scripting against Active Directory I usually specify a domain controller for the “-Server” parameter of the AD cmdlets to prevent potential issues with replication.

For example, say you are creating a new group, and then want to change the ACLs of that group, for example the “WriteMembers”-permission. You probably want to specify the same domain controller on these two requests to make sure the newly created group is actually available when changing the ACL.

But hard coding things is usually not a good idea, and if that DC happens to go offline while a script is running, a lot of requests might fail. So what I did was to create a function that checks if the specified DC is online, and if it isn’t, it retrieves a list of all the DCs that exist in the same site as the server where the script is executing, and picks the next available one after verifying that it works.

In this case, MyDC01.MyDomain.local was offline and didn’t work, so the function instead returned MyDC02.MyDomain.local, which has been verified by issuing an AD query to it. It is simply returned as a string, so to use it in a script you could do something like this (with some error handling added):
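A sketch of that usage (Get-VerifiedDomainController is a hypothetical name standing in for the function described above):

```powershell
# Resolve a working DC up front; bail out if none can be found
try {
    $DC = Get-VerifiedDomainController -DomainController 'MyDC01.MyDomain.local' -ErrorAction Stop
}
catch {
    Write-Error "No working domain controller could be found: $_"
    break
}

# Use the same DC for every AD call so newly created objects are visible
New-ADGroup -Name 'MyNewGroup' -GroupScope Global -Server $DC
Get-ADGroup -Identity 'MyNewGroup' -Server $DC
```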

If you put this first in the script, you’ll know that the DC used will be online when the script starts. If you want to, you could of course run this function again within a catch statement to be able to “fail over” to another DC during script execution.

When installing a new SMA (Service Management Automation) runbook worker or web service it might fail with the following error message in the log:
“Product: System Center 2012 R2 Service Management Automation Runbook Worker — Unable to communicate with SQL Server using database information provided.”

If you are doing a manual installation using the wizard it will look like this:

Not sure if this matters, but in my case, the database is hosted in a SQL AlwaysOn Availability Group on a non-default port (not 1433), and we are using “Windows Authentication”, or a “trusted” connection to log into the database.

After investigating this issue and looking at the network communication I realized that the installation actually tries to validate the connection on the database-settings page, but when it’s finally time to start the installation, it just fails right away. Also, I found that the connection at the “verify sql settings”-step is established via a service (svchost.exe or CcmExec.exe), which could explain why this workaround actually works (it’s probably using the same component in the OS).

I finally found a workaround for this issue though, which is pretty weird, but it got me through the installations of all my runbook workers and web services so I thought I’d share it if anyone else is experiencing this issue.

Workaround using temporary ODBC-connection
We will not actually create the connection, just fill in enough information to be able to do a test.

Fill in all the settings in the SMA Runbook Worker-wizard but do not click “Install” at the last page.

Instead, start the “ODBC Data Sources (64-bit)” (%windir%\system32\odbcad32.exe) using the same account as your installation wizard is running with and click “Add…”, see below:

Then click “Finish”:

Fill in the details of your database for SMA (the first two fields can be anything):

Fill in the name of your sql server, click next, and choose “Client Configuration” if you are using a non-default port and fill in the one you are using:

Click next, and choose to change the default database to master (not 100% sure this is needed, but a thread on TechNet suggested this), like this:

Press “Finish” at the next step, but instead of pressing “OK” you choose “Test Data Source…” and you should see a successful test:

Immediately switch back to your SMA Runbook Worker wizard and press Install, it should now go through fine!

When the installation has finished, go back to your “ODBC connection test” and choose OK, then Cancel three times to exit the wizard for creating an ODBC connection without actually creating it.

So, we have created our Connect-OnlinePizza function and now have access to parts of the site that are only available when logged in. But how?

Remember the Invoke-WebRequest-cmdlet in the last post?
We specified a session variable in the Global scope, and that variable contains cookies and data to keep our session with the site consistent over multiple webrequests, and that’s what we’ll use in our next function, Get-MyOnlinePizzaAccountInfo.

Get-MyOnlinePizzaAccountInfo
First of all, we need to find what page holds the information we want. In this case, the page containing the account information was located at http://onlinepizza.se/?view=andraKonto (it requires you to be logged in).

Make sure you ran the “Connect-OnlinePizza”-function first, that way the “$OnlinePizzaSession”-variable will be available and make it possible for us to reach this page and see the details of our account.

To fetch the page and load it into a variable you could do this (we save it to file because of the issue with the encoding name, see part 1 of this guide):
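A sketch of that step (the session variable name matches the one from Connect-OnlinePizza, and the temp file path is my own choice):

```powershell
# Fetch the account page through the authenticated session; saving to file
# and reading it back works around the encoding issue from part 1
Invoke-WebRequest -Uri 'http://onlinepizza.se/?view=andraKonto' `
                  -WebSession $Global:OnlinePizzaSession `
                  -OutFile "$env:TEMP\AccountPage.html"

$AccountPage = Get-Content -Path "$env:TEMP\AccountPage.html"
```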

I’m by no means an expert in string manipulation or regex, so there is probably a better way of doing this, but I usually use the Split-operator to get the part I want. In this case we need to split the string after value=” and before “/> (or remove it). We also need to fetch this particular line from the site's HTML code.

As you can see, we get two tokens back, and we need the second one. This can easily be done by putting everything in another pair of parentheses and then just specify which one we want. Since the first one will be identified as 0, and the one we want 1, we will end up with this:

To get rid of that last part, we could either use the "replace"-operator or do another split. In this case, the "replace"-operator might be the better choice, but in my experience the split-operator will provide a more robust and consistent result. The site might change and add something else after "/> on the same line, or there might be some white space that you didn't see, so let's just do another split, wrap that up in a new set of parentheses and select token 0 (the first one), which will get us the part we want:
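Put together on a made-up line of HTML (the real page's markup may of course differ), the whole split chain looks like this:

```powershell
# A stand-in for the line fetched from the site's HTML
$Line = '<input type="text" name="email" value="user@example.com"/>'

# First split: everything after value=" ends up in token 1
($Line -split 'value="')[1]                     # user@example.com"/>

# Second split: token 0 before "/> is just the value itself
(($Line -split 'value="')[1] -split '"/>')[0]   # user@example.com
```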

Take a look at lines 7 through 10: here we check if there is a variable called "$OnlinePizzaSession" available. If not, the user running this function probably didn't run the "Connect-OnlinePizza"-function, and this function won't work. Therefore, we write an error and exit the function. This is a pretty good method to ensure that the functions are used correctly.

So, finally time for our last function!

Get-PizzaRestaurant
Most parts of this function will be created more or less in the exact same way as the last one, so I'll just go through the differences.

First of all, we want these cmdlets to work together in a good way to give them that "module"-feeling 🙂

One way of doing that is to add pipeline support, but how?

Well, this function will return a list of restaurants based on our location, and the location is based on our postal code (zip code). If you check our last function we actually return a property value called "PostalCode" which would be perfect for pipelining, and it's really easy to do!

All we need is "ValueFromPipelineByPropertyName=$true" when declaring the parameter, like this:
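A sketch of that declaration (the real function has more parameters and the actual scraping logic in the process block):

```powershell
function Get-PizzaRestaurant {
    [CmdletBinding()]
    param (
        # Binds to the PostalCode property of incoming pipeline objects
        [Parameter(Mandatory=$true, ValueFromPipelineByPropertyName=$true)]
        [int] $PostalCode
    )

    process {
        # ...scrape the restaurant list for $PostalCode here...
    }
}

# The PostalCode property returned by Get-MyOnlinePizzaAccountInfo
# then binds to the parameter automatically:
Get-MyOnlinePizzaAccountInfo | Get-PizzaRestaurant
```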

And we need to verify that the property in the objects we output matches the parameter name:

Also, as you can see, we declare the parameter data type as an int; this way, no one will give us a postal code with spaces in it. If we wanted to, we could also validate that it really is a postal code, but again, this guide is not so much about writing advanced functions in general but more about web scraping, so we'll just let it be.

A few more comments might be needed here. If you look at line 19, we use the opposite of split, the join-operator. Why? Well, when looking at the HTML code of the site, the information spans multiple lines; by joining on linefeeds (`n = linefeed) we can get all the information for each restaurant as "one part" instead of multiple lines, which helps a lot!

Also, at line 32 and 33, we call a method called Trim(), this method removes all leading and trailing white-space characters from the string we're working on.

Finally, at line 45 we remove all the variables to prevent them from being "reused" on the next iteration of the loop if the next restaurant's data is different or missing. Clear-Variable would work perfectly here as well.

And that's it!

Result
We have now created functions to connect to a site, utilize functions that are only available when logged in and we have also made the functions work together in a nice way.