Trying to prove that Skynet should be running on PowerShell!

Monthly Archives: October 2013

There are certainly other solutions out there that are great, even excellent. For me personally, building parts of it yourself is mainly about fun: you learn a lot by doing it, and you can base your tasks on almost any piece of information out there.

I’ll give you an example!

I’ve started going to work after the traffic calms down in the morning. I have a pretty good idea of when this usually happens, but some days there are no traffic jams at all, and other days it’s completely hopeless.

Wouldn’t it be nice to be able to utilize live traffic information on the internet, and based on that trigger your wake up call? At least I thought so 🙂

First of all, try to find a provider for traffic information near you, and make sure you don’t break their ToS by fetching that information in an automatic way (i.e. web scraping).

I won’t go into detail on how to build a webscrape-cmdlet right now, but I’ll show you how to use it when it’s done. (A guide to web scraping available here, here and here.)

This is the script I run every morning, I think the code comments will be enough to explain how it works:

# Import the Telldus module
Import-Module '.\Telldus.psm1'
# Import the module containing your traffic parser
Import-Module '.\WebDataModule.psm1'
# Set your home and work address
$HomeAddress = "Homeroad 1, MyTown"
$WorkAddress = "Workaround 1, WorkTown"
# Set a max travel time limit (in this case, in minutes)
[int] $TravelTimeLimit = 30
# I want it to be under this value for $NumberOfTimes consecutive times
[int] $NumberOfTimes = 3
# Start above the limit so the first check can't pass by accident.
# Note: no [int] cast here, a failed cmdlet should leave the value as $null
$CurrentTravelTime = $TravelTimeLimit + 1
# Reset variable to zero
[int] $NumberOfTimesVerifiedOK = 0
# Run until the travel time has been below the limit enough times
while ($NumberOfTimesVerifiedOK -lt $NumberOfTimes) {
    # Reset variable
    $CurrentTravelTime = $null
    # Load new data, the "Get-Traffic"-cmdlet is my traffic parser
    $CurrentTravelTime = Get-Traffic -FromAddress $HomeAddress -ToAddress $WorkAddress | Select-Object -ExpandProperty TravelTime
    # Check that it is below the travel time limit, and that it is not $null (cmdlet failed)
    # Increase $NumberOfTimesVerifiedOK if it was OK, or reset to zero if it wasn't
    if ($null -ne $CurrentTravelTime -and $CurrentTravelTime -lt $TravelTimeLimit) {
        $NumberOfTimesVerifiedOK++
    }
    else {
        $NumberOfTimesVerifiedOK = 0
    }
    # Write current status
    Write-Output "Traffic has been verified as OK $NumberOfTimesVerifiedOK consecutive times"
    # Pause for a while before checking again, 10 minutes or so...
    Start-Sleep -Seconds 600
}
# The while loop will exit when traveltime has been verified enough times.
# Write status
Write-Output "Initiating sunrise, current travel time to $WorkAddress is $CurrentTravelTime minutes, and has been below $TravelTimeLimit for $NumberOfTimes consecutive times."
# Time to initiate the "sunrise effect"
# Set the device id for the lamp you want to light up
$BedroomLampDeviceID = "123456"
# Set the start dim level
$SunriseDimlevel = 1
# Set how much it should increase every time we "raise" it
$SunriseSteps = 5
# Set your Telldus credentials
$Username = "[email protected]"
$Password = "MySecretPassword"
# Kick off the "sunrise-loop"
while ($SunriseDimlevel -lt 255) {
    # Write some status
    Write-Output "Setting dimlevel to $SunriseDimlevel"
    # Set the new dim level
    Set-TDDimmer -Username $Username -Password $Password -DeviceID $BedroomLampDeviceID -Level $SunriseDimlevel
    # Sleep for a while (30 seconds makes the "sunrise" ~30 minutes long depending on your $SunriseSteps value)
    Start-Sleep -Seconds 30
    # Set the next dim level
    $SunriseDimlevel = $SunriseDimlevel + $SunriseSteps
}
# Set the lamp to full power (the loop has exited) and exit
Set-TDDimmer -Username $Username -Password $Password -DeviceID $BedroomLampDeviceID -Level 255
Write-Output "Maximum level is set."

This script is scheduled to run on weekday mornings, around the earliest time I want to get up. The first loop runs until traffic calms down, and then the “sunrise” loop starts and runs until the light reaches its maximum level (255).

You could of course turn on other stuff as well, like a coffee brewer (make sure you don’t do this while you are away…), a radio, or have it play some music.

Since we moved to an apartment with a balcony we wanted to have some plants there. The problem is that we’re a lot better at killing plants than actually taking care of them. And even the few days when we actually did remember to water them, the plants could dry out on hot days.

So, how to solve this? With PowerShell of course! 🙂

You could start off with the Telldus PowerShell module from this blog post (though this one is preferred, since it handles credentials a lot better). The module is under development, but it works for checking temperatures, dimming, turning devices on and off, etc.

You also need something to water with. I went with a pump from Gardena that runs for 1 minute when turned on, or 1 min/24h if left on constantly. This means that switching it off and on again starts a watering session. The water flow is controlled by using different outlets on the pump.

After adding the devices to Telldus Live! you can just open up your favorite PowerShell editor, import the module and start coding. (The module requires at least PowerShell v3.)

An example of how it can be done follows. (The below module has been updated, make sure you check out the new version that uses a PSCredential among other things!)

First import the module, and set the credentials and some variables that are needed (pass-through for credentials, using a secure string, is on the to-do list):

Import-Module C:\Scripts\Telldus\Module\TelldusModule.psm1
$Username="[email protected]"
$Password="MySecretPassword"
# At what temperature (and above) should I start the pump?
$WhenToWaterTemp=25
# If the humidity gets below this value, I will also start the pump
$WhenToWaterHumidity=40
# How old may the sensor data be before I consider it invalid? (in hours)
$DataAgeLimitInHours=2
# Time zone difference compared to the Telldus servers.
$TimeZoneDifference=2

You don’t strictly need to check how old the sensor data is, but these sensors can sometimes stop updating, and if that happens when it’s hot outside, you might end up drowning your plants if you schedule the script to run often.
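Putting those variables to use, the decision logic could look something like the sketch below. Note that Get-TDSensor, Set-TDDevice, their parameters, the sensor property names and the device id are assumptions on my part; check the module for the actual cmdlet names.

```powershell
# A sketch of the watering logic; cmdlet names, parameters and property
# names are assumptions, verify them against the actual module
$Sensor = Get-TDSensor -Username $Username -Password $Password |
    Where-Object { $_.Name -eq 'BalconySensor' }

# Compensate for the time zone difference and check the age of the data
$LastUpdate = $Sensor.LastUpdate.AddHours($TimeZoneDifference)
$DataAge = (Get-Date) - $LastUpdate

if ($DataAge.TotalHours -gt $DataAgeLimitInHours) {
    Write-Output "Sensor data is too old, skipping this run (worth a notification!)"
}
elseif ($Sensor.Temperature -ge $WhenToWaterTemp -or $Sensor.Humidity -lt $WhenToWaterHumidity) {
    # Power cycle the pump outlet to trigger a one minute watering session
    Set-TDDevice -Username $Username -Password $Password -DeviceID '654321' -Action turnOff
    Start-Sleep -Seconds 5
    Set-TDDevice -Username $Username -Password $Password -DeviceID '654321' -Action turnOn
}
```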

That’s basically it! I added some functionality for push notifications to my phone (the Growl API and Boxcar work great for this). Or you could just send an e-mail (you probably want to know if the sensor data isn’t being updated, for example).

Hope this gives you an idea of what you can do with PowerShell and home automation.

I will try to add other things that I’ve automated with this module when I get the time!

During our migration to Office 365, we first needed to make sure all the users had an account in Active Directory. In our case, a lot of them didn’t since they only used Notes-applications.

The provisioning/sync of these accounts is for another blog post (or several), but I thought I could share some of the code we use for setting passwords.

First of all, you need something to generate the passwords with. There are a lot of scripts written for this (a good blog post that helped me get started is this one), but most of the ones I found use the same process for creating the passwords.

For example:

Get-Random -Minimum 65 -Maximum 91 | ForEach-Object { [char][byte]$_ }

This will generate a random upper case letter from A to Z (note that -Maximum in Get-Random is exclusive, so it has to be set one higher than the last character you want). If you change the range you can generate any character you want using this method. If you would like some help finding which character corresponds to which number, you can have a look here.

What most of the scripts I found were lacking was a way of making sure that the generated passwords actually follow the password complexity rules. The passwords were random, and of the correct length, but you couldn’t be sure that all the required character types were actually in the password.

So I ended up writing my own function for this. It’s pretty short, so I’ll do a code walkthrough of the important parts of it.

First, we need to specify which characters must be included in the password. I did this by creating four different arrays. You might want some control over which special characters are included, since some applications (if the password is not for Active Directory accounts) can’t handle all of them. And you might not want your users to have to change their keyboard layout to be able to log in 🙂
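A sketch of how those arrays, and the required characters drawn from them, could be set up follows. The exact character sets and the $PasswordLength value are my own choices, not necessarily those of the linked function:

```powershell
# Four arrays with the character types that must be present in the password
$UpperCase         = [char[]](65..90)    # A-Z
$LowerCase         = [char[]](97..122)   # a-z
$Numbers           = [char[]](48..57)    # 0-9
$SpecialCharacters = [char[]]'!#%&()=?+-_'

# Pick one character from each array to guarantee complexity, then pad
# with random characters from all of them until the wanted length is reached
$PasswordLength = 12
$AllCharacters  = $UpperCase + $LowerCase + $Numbers + $SpecialCharacters

$RequiredCharacters = @(
    $UpperCase         | Get-Random
    $LowerCase         | Get-Random
    $Numbers           | Get-Random
    $SpecialCharacters | Get-Random
)
$Padding = $AllCharacters | Get-Random -Count ($PasswordLength - $RequiredCharacters.Count)
$CharacterString = -join ($RequiredCharacters + $Padding)
```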

$CharacterString now contains all the characters we need. But we don’t want the passwords to always have the character types in the same order (first a capital letter, then a lower case one, then a number, then a special character, and then random), since that would make the password a lot weaker.

To fix this we turn it into an array and then randomize the order of the characters, and finally send it back to the pipeline.

Like this:

# Create a char array of the characters
$TempPasswordArray=$CharacterString.ToString().ToCharArray()
# Randomize the order of them
$PasswordToReturn=($TempPasswordArray | Get-Random -Count $TempPasswordArray.length) -join ''
# Send it back to the pipeline
Write-Output $PasswordToReturn

This password will always contain all the character types we have specified, and the order of the characters will always be random.

The complete code (which includes $NrOfCharacterTypes and some other things) is available here.

Even if you configure your User Profile Manager in Citrix correctly, keep it up to date and so on, you might end up with the same problem we had: every now and then a profile contains a locked file, so the profile doesn’t get cleaned up after the user logs off.

When that user logs on again they might end up with a “username.domain001” profile, then 002, and so on, and the user’s profile settings soon become pretty messy.

One way of making this problem less of a pain, is to have a script clean up the folders (and of course the registry values) to make sure the users can get their profiles loaded correctly.

As always, use any script you find on the internet with caution, and test them fully before deploying anything in your production environment.

A script walkthrough follows:

First of all, I usually set the variables that might differ between environments and/or domains.

Most of these are pretty obvious in this script, but one that might require some explanation is this one:

This just collects all the local users on a server/computer. We didn’t want these profiles to get cleaned up since they don’t have any central profiles.
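For example, something like this (a sketch using WMI; the variable name in the actual script may differ):

```powershell
# Collect all local (non-domain) user accounts on this machine, so their
# profile folders can be excluded from the cleanup
$LocalUsers = Get-WmiObject -Class Win32_UserAccount -Filter "LocalAccount='True'" |
    Select-Object -ExpandProperty Name
```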

The next ‘weird’ one:

# Tune performance with $SpeedBrake. Lower is quicker, but uses more CPU.
$SpeedBrake=1

This variable is used in a “Start-Sleep” cmdlet later, to make sure the script doesn’t steal too many CPU resources.

The next one is fairly obvious, but I’ll explain it anyway:

# Add a regex that works for your usernaming standard
$UserNameRegEx="\w"

This should be changed to match whatever you have as a standard for your SamAccountNames. The “\w” will match more or less anything, so change it to something that matches your environment. This is just for extra safety; you don’t want to end up deleting a service account profile, for example.

Speaking of service accounts, if you have a naming standard for those, add it here:

# Add a list of other accounts you want to exclude
$ExcludedUsers="Public","ctx","svc"

If the account name contains for example “svc”, it will be ignored by the script.

But what about logged on users? We don’t want to remove their profiles, do we?
The hard part is that there was no easy way (like a cmdlet) to list logged on users (at least not when I wrote this script a while back), so I went with listing all current processes and their owners, and selecting the unique ones. That looks like this:
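A sketch of that approach (the variable name is my own):

```powershell
# List "logged on" users by collecting the owner of every running process
# and selecting the unique user names
$LoggedOnUsers = Get-WmiObject -Class Win32_Process | ForEach-Object {
    $Owner = $_.GetOwner()
    # GetOwner() returns 0 when the owner could be resolved
    if ($Owner.ReturnValue -eq 0) { $Owner.User }
} | Sort-Object -Unique
```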

Now we should have all the information we need to safely go through all the profiles and remove those that we don’t want anymore.

I won’t go through this part line by line, but I think you will get the point when reading the code comments.

The first part of the code looks like this:

# Start to loop through all the profile folders
foreach ($LocalProfileFolder in $LocalProfileFolders)
{
    # Sleep to prevent CPU load
    Start-Sleep -Milliseconds $SpeedBrake
    # Set this variable to $True, it will be changed later if the profile should stay
    $ThisProfileShouldBeDeleted = $True
    # Does this folder match the username regex?
    if ($LocalProfileFolder.Name -match $UserNameRegEx)
    {
        # Make sure it doesn't match any of the "ignored" users (logged on ones etc...)
        foreach ($UserToIgnore in $UsersToIgnore)
        {
            # Again, sleep to prevent CPU load
            Start-Sleep -Milliseconds $SpeedBrake
            # Check if it matches an "ignored" user
            if ($LocalProfileFolder.Name -like "*$UserToIgnore*")
            {
                # If it did, it should not be deleted
                $ThisProfileShouldBeDeleted = $False
            }
        }

So it starts by setting $ThisProfileShouldBeDeleted to $True. It then does a couple of checks (regex matching etc.) to make sure that the user’s profile is OK to delete (if not, it sets $ThisProfileShouldBeDeleted to $False).

A few things to point out: the script always tries to delete the profile through WMI first, since this is the “cleanest” way of doing it (it removes the registry values together with the folder). It’s the same thing as removing the profile through “My Computer”.

If there was no “WMI profile”, it just deletes the folder. I know it looks terrible to use “cmd /c rd /s /q” instead of Remove-Item for this, but Remove-Item has a bug that makes it a bad idea to use for this sort of thing (it has to do with how it handles symbolic links).

The Active Directory Module for PowerShell is great. You can do almost anything with it, but every now and then you might need to list the local groups and their members on a server/client, and that is harder…

To achieve this I wrote a couple of advanced functions to simplify the task. “Get-LocalGroup” and “Get-LocalGroupMember”.
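As a taste of the approach, here is a minimal sketch of how listing members can be done with the WinNT ADSI provider. The function name matches the post, but the parameters and the implementation below are my assumptions, not the linked code:

```powershell
function Get-LocalGroupMember {
    # Sketch: list the members of a local group through the WinNT ADSI provider
    param(
        [Parameter(Mandatory)]
        [string] $GroupName,
        [string] $ComputerName = $env:COMPUTERNAME
    )
    $Group = [ADSI]"WinNT://$ComputerName/$GroupName,group"
    $Group.Invoke('Members') | ForEach-Object {
        # Each member is a COM object; fetch its name and ADSI path
        [PSCustomObject]@{
            Name = ([ADSI]$_).InvokeGet('Name')
            Path = ([ADSI]$_).Path
        }
    }
}

# Example: Get-LocalGroupMember -GroupName 'Administrators'
```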

When deploying SCOM (System Center Operations Manager) in a multi-forest environment, you use certificates to establish the trust between the servers. Since we have CA Servers in every domain, we started up with configuring autoenrollment for all the SCOM Gateway Servers, and made sure all the different CA servers were added to the trust-store of the central servers. (I will not go through that process now, if you want me to, leave a comment).

So autoenrollment now works, but that isn’t really enough, is it? We still need to configure the Gateway Server to actually switch to the new certificate when it arrives.

The tool Microsoft has given to us to do this is MOMCertImport.exe, but that tool gives you a pop-up that you actually need to click on… Not very “automatable”.

After some research, we found that all this tool seems to do is add the certificate’s serial number, backwards (in pairs), as a binary value in the registry. That is very automatable! 🙂

Before you start, you should know that this method is probably NOT supported by Microsoft, on the other hand, if it fails, you could run MOMCertImport.exe and see if that helps…

A code walkthrough follows:

Let’s start by setting up some user-controlled variables, like which certificate template is used and where the registry key is located:
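For example (the template name here is made up, and the registry path is an assumption based on where MOMCertImport.exe writes its value, so verify it against one of your own servers):

```powershell
# The certificate template the gateway certificates are enrolled from (made up name)
$CertificateTemplateName = 'SCOMGatewayCertificate'
# Where the serial number ends up in the registry (verify on a server where
# MOMCertImport.exe has already been run)
$RegistryPath = 'HKLM:\SOFTWARE\Microsoft\Microsoft Operations Manager\3.0\Machine Settings'

# Pick the newest certificate in the machine store issued from that template
$Certificate = Get-ChildItem -Path Cert:\LocalMachine\My |
    Where-Object { ($_.Extensions | ForEach-Object { $_.Format($false) }) -match $CertificateTemplateName } |
    Sort-Object -Property NotBefore -Descending |
    Select-Object -First 1
$SerialNumber = $Certificate.SerialNumber
```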

It’s now time for some regex magic again. We want to pair the serial number up (two and two), and then reverse those pairs. I must confess I did a couple of rewrites of this before I found one that seems quite effective:

# Reverse the serial number to match the format in the registry
$ReversedPairs=[regex]::Matches($SerialNumber,'..','RightToLeft') | ForEach-Object { $_.Value }

The two dots (‘..’) tell the regex engine to match two characters at a time, and the ‘RightToLeft’ option makes the matches come out in reverse order. The ForEach-Object at the end is there to get only the values and nothing else.
But it needs to be in binary format as well, and we achieve that by doing this:
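A sketch of one way to do it (the registry path and value name are assumptions based on what MOMCertImport.exe writes; compare with a server where the tool has already been run):

```powershell
# Example serial number; in the real script this comes from the certificate
$SerialNumber = '1A2B3C4D'
# Pair up and reverse (same technique as above)
$ReversedPairs = [regex]::Matches($SerialNumber, '..', 'RightToLeft') | ForEach-Object { $_.Value }
# Convert each hex pair to a byte, giving the binary value for the registry
$BinarySerial = [byte[]]($ReversedPairs | ForEach-Object { [Convert]::ToByte($_, 16) })

# Write it to the value MOMCertImport.exe uses (assumed path and value name)
$RegistryPath = 'HKLM:\SOFTWARE\Microsoft\Microsoft Operations Manager\3.0\Machine Settings'
if (Test-Path -Path $RegistryPath) {
    Set-ItemProperty -Path $RegistryPath -Name 'ChannelCertificateSerialNumber' -Value $BinarySerial
}
```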

There are obviously a lot of things that need to be migrated when changing mail platform. One of the “very nice to have” items is user pictures/thumbnails, which show up in Outlook, OWA, Lync and so on…

But this can be quite a challenge. The pictures are stored in the “thumbnailPhoto” attribute as a byte array, and the picture needs to be less than 10 kB to work in Lync/Outlook (actually, the “Active Directory Users & Computers” snap-in wants it under 8 kB to show it in the attribute editor, so that is what we chose).

The pictures from Notes are “user generated”, they can be gif, jpg, png, bmp or something else, and I’m guessing this would be the case for many large organisations out there.

The solution we used was to add a step in our “Notes to Active Directory” script that checks if a picture is available. If it is, it uses a small advanced function I wrote, called ConvertTo-ADThumbnail, to convert the picture to a byte array and save a copy of the new file on disk. It also outputs the change in size if that’s needed.
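The core of that conversion can be sketched like this (ConvertTo-ADThumbnail itself does more, such as resizing and reporting the size change; the file path and account name below are made up):

```powershell
# Requires the ActiveDirectory module (RSAT)
Import-Module ActiveDirectory

# Read the picture as a byte array (path is just an example)
$PhotoBytes = [byte[]](Get-Content -Path 'C:\Photos\jdoe.jpg' -Encoding Byte)

# The attribute needs to stay small; 8 kB keeps ADUC happy as well
if ($PhotoBytes.Length -le 8KB) {
    Set-ADUser -Identity 'jdoe' -Replace @{ thumbnailPhoto = $PhotoBytes }
}
else {
    Write-Warning "Picture is too large, resize it to below 8 kB first."
}
```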

Home automation is probably something we could live without. But it is fun 🙂

A while ago I bought a little device called “Tellstick Net“, which is a device made for controlling other electrical devices, like dimming your lamps, turning on/off sockets, or for receiving data from different sensors, like temperature.

It had one obvious disadvantage though, it didn’t have PowerShell support…

Luckily, this is a problem with an easy fix! I went with a “quick and dirty” method of talking to the API site, but it has worked pretty much flawlessly for over a year now.

I will post more information about this project on this blog, but I thought I could start with some screenshots to give you an idea of what you can do (sorry for naming my devices in Swedish…):

This module has, for example, been used for watering plants during vacations (giving extra water on hot days), turning off lights when launching media players, etc…

This module is now published at the PowerShell Gallery. You can get the latest version at this link.

Every now and then I need to parse a DNS debug log; it’s useful in many different scenarios. I wrote an advanced function to help me with this: specify a file name, or pipe log lines (or file names) to it, and it will return a properly formatted object.

Be aware that I only added some of the different date formats I could find, so verify that it works for your server.
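To give an idea of the approach (this is not the linked function, just a sketch of one common log line layout; the regex and the property names are my own):

```powershell
# One example line in the style of a Windows DNS debug log
# (the exact layout varies between server versions)
$LogLine = '2013-10-05 08:15:02 0E10 PACKET  0000000003 UDP Rcv 192.168.1.25   0002   Q [0001   D   NOERROR] A      (3)www(7)example(3)com(0)'

# Pull out the most useful fields and return them as an object
if ($LogLine -match '^(?<Date>\S+\s\S+)\s.*\s(?<Protocol>UDP|TCP)\s(?<Direction>Snd|Rcv)\s(?<ClientIP>\d{1,3}(\.\d{1,3}){3})') {
    [PSCustomObject]@{
        Date      = $Matches['Date']
        Protocol  = $Matches['Protocol']
        Direction = $Matches['Direction']
        ClientIP  = $Matches['ClientIP']
    }
}
```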

During our migration to Office 365, we ran into an issue with creating Master Lists for the migration tool. The tool just creates one huge file with all of the users in it, but we want to migrate them based on different things like mailbox size, where in the organisation they are and so on.

When we have the users we want to migrate in a list, we need to split that list up for scalability reasons (multiple migration tool servers), and since the files need to be formatted in quite a specific way, this was becoming a pain…

Just browse for the master tsv file, the columns found in the file will be automatically populated in the droplist. Choose which one you want to do the matching on (in our case targetaddress, same as e-mail/UPN):

Select the other settings, should be pretty obvious:

And hit “Build file(s)”, and watch it go:

If you just want to split the master file, that’s possible as well; just tick that box and hit the Build button:
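If you would rather script a plain split than use the form, the core idea can be sketched like this (file names and batch size are made up, and the sample data is generated inline; the real tool also handles the matching described above):

```powershell
# Split a tab separated master file into smaller files, repeating the
# header row in every output file
$BatchSize  = 2
$TempDir    = [IO.Path]::GetTempPath()
$MasterFile = Join-Path $TempDir 'MasterList.tsv'

# Create a small sample master file for the demo
"targetaddress`tmailboxsize",
"[email protected]`t100",
"[email protected]`t200",
"[email protected]`t300" | Set-Content -Path $MasterFile

$Rows = Import-Csv -Path $MasterFile -Delimiter "`t"
$FileNumber = 0
for ($i = 0; $i -lt $Rows.Count; $i += $BatchSize) {
    $FileNumber++
    $LastIndex = [Math]::Min($i + $BatchSize - 1, $Rows.Count - 1)
    # Export-Csv writes the header into every part file
    $Rows[$i..$LastIndex] |
        Export-Csv -Path (Join-Path $TempDir "MasterList_Part$FileNumber.tsv") -Delimiter "`t" -NoTypeInformation
}
```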

I hope someone else might find this little form useful!

The code is available here and it requires at least PowerShell v3 to run properly.