Tag: PowerShell

In my last post, I talked about mounting disk images in Windows 8. Both Windows 8 and Server 2012 include native support for mounting ISO images as drives. However, in prior versions of Windows you needed a third-party tool to do this. Since I have a preference for open source, my tool of choice before Windows 8 was WinCdEmu. Today, I decided to see if it was possible to determine the drive letter of an ISO mounted by WinCdEmu with PowerShell.

A quick search of the internet revealed that WinCdEmu contained a 32-bit command-line tool called batchmnt.exe, and a 64-bit counterpart called batchmnt64.exe. These tools were meant for command-line automation. While I knew there would be no .NET libraries in WinCdEmu, I did have hope there would be a COM object I could use with New-Object. Unfortunately, all the COM objects were for Windows Explorer integration and popped up GUIs, so they were inappropriate for automation.

Next I needed to figure out how to use batchmnt. For this I used batchmnt64 /?.

Mounting and unmounting are trivial. The /list switch produces some output that I could parse into a PSObject if I so desired. However, what I really found interesting was batchmnt /check. The process returns the drive letter as the ERRORLEVEL, i.e. the ExitCode of the batchmnt process. If you have ever programmed in a C-like language, you know your main function can return an integer. Traditionally 0 means success and a non-zero number means failure. However, in this case 0 means the image is not mounted, and a non-zero number is the ASCII code of the drive letter. Getting that code in PowerShell is simple:

The Start-Process cmdlet normally returns immediately without output. The -PassThru switch makes it return information about the process it created, and -Wait make the cmdlet wait for the process to exit, so that information includes the exit code. Finally to turn that ASCII code to the drive letter we cast with [char].
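Putting those pieces together, a minimal sketch might look like this (the ISO path is hypothetical, and batchmnt64.exe is assumed to be on the PATH):

```powershell
# Run batchmnt64 /check and translate the exit code into a drive letter.
# The image path below is just an example.
$proc = Start-Process batchmnt64.exe -ArgumentList '/check', 'C:\images\example.iso' -PassThru -Wait
if ($proc.ExitCode -eq 0) {
    'The image is not mounted.'
} else {
    # A non-zero exit code is the ASCII code of the drive letter
    [char]$proc.ExitCode
}
```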

I’ve been periodically hacking away at PoshRunner. I have lots of plans for it. Some of these are rewriting some of it in C++, allowing you to log output to MongoDB and total world domination! However, today’s news is not as grand.

Usage: poshrunner.exe [OPTION] [...]
Options:
--appdomainname=NAME Name to give the AppDomain the PowerShell script executes in.
--config=CONFIGFILE The name of the app.config file for the script. Default is scriptName.config
-f SCRIPT, --script=SCRIPT Name of the script to run.
-h, --help Show help and exit
--log4netconfig=LOG4NETCONFIGFILE Override the default config file for log4net.
--log4netconfigtype=LOG4NETCONFIGTYPE The type of Log4Net configuration.
--shadowcopy Enable Assembly ShadowCopying.
-v, --version Show version info and exit

I have a tendency to do odd things with technology so that things don’t just work. When I point out the obscure edge cases I find, most people tell me, “well don’t do that.” I usually ignore them and dream of tilting at windmills. Well, today a windmill has been tilted, and this is the epic tale.

Now all this is a lot scarier than it sounds. However, it stops two important groups of people from ever using PowerShell to call DLLs that absolutely require you to manipulate your app.config:

People scared off by the word AppDomain

People that realize they have better things to do than everything I described above

Lucky for these two groups of people, I wasted my time so they didn’t have to! The project is currently called AppDomainPoshRunner, and I ILMerge it (via IL-Repack) into poshrunner.exe. Right now poshrunner takes one command line argument, the path to a script. If the script exists it will run it in an AppDomain whose config file is scriptname.config. Log4net configuration is read from a file called ADPR.log4net.config in the same directory as poshrunner.config.

The full background is too long and convoluted for this post. This was all born out of a problem with calling New-WebServiceProxy twice in the same PowerShell console. I use log4net to write the console messages, so this has the potential to be quite extensible. Then Stan needed to run PowerShell scripts from MSBuild and was complaining to me about it over Twitter. He didn’t like the hacky solution I had at the time. Eventually I realized this was the way to simplify my previous solution.

So download the zip file. Try it out. Complain to me when you find bugs!

Environments change and the solutions to support them have to keep up. I was very entertained with my old deployment solution for a good while. However, we eventually moved to Azure, and I needed to scramble to find something new. Tom Hollander’s Automated Build and Deployment with Windows Azure SDK 1.6 filled that void until I upgraded my project type to the 1.7 SDK. At that point, I realized I had to roll up my sleeves and cobble something new together.

From an automated deployment standpoint, the crippling change between the 1.6 and 1.7 SDKs is the lack of an “ImportAfter” folder, which allowed us to include legacy MSBuild files to attach to the build process. This is what Tom used to attach a PowerShell deployment script to the SDK’s Publish build target. However, with the Azure 1.7 SDK, I had to figure out how to execute that PowerShell script myself.

Creating a Management Certificate & Publish Settings file

Visual Studio has a link which allows you to download a publish settings file, without completely explaining what the side effects are. I myself didn’t understand the problem when I encountered the first symptom: “you have reached the maximum number of management certificates.” I was forced to understand the situation when I tried to get publish settings files for the 2nd and 3rd Azure subscriptions my account was associated with. The link creates a management certificate, uploads it to your Azure account, and provides you with a .publishSettings file to install onto your machine. Life is actually easier when we start taking control of our management certificates.

We can take this certificate and upload it to the Management Certificates console in Azure. Take note of your subscription ID and the thumbprint of the certificate, as you will need them to create your publish settings file.

Using the PublishSettingsCreator utility, we can create a publish settings file to carry our management information.

PS C:\Users\Administrator\Desktop> Import-AzurePublishSettingsFile .\ecd7cc1d-12ec-8cf6-a60b-0cf14db32020.publishsettings
Setting: AzureExample as the default and current subscription. To view other subscriptions use Get-AzureSubscription

Importing the settings file sets the subscription as default. We can get the default subscription as follows.
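With the service-management-era Azure module, that would look something like this (a sketch; cmdlet parameters varied between module versions):

```powershell
# Show the subscription that Import-AzurePublishSettingsFile marked as the default
Get-AzureSubscription -Default
```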

With the publish settings file and the Import-AzurePublishSettingsFile and Set-AzureSubscription commands, we can allow any machine to deploy using said management certificate. The certificate and publish settings file should be guarded well; either of these files allows access to your Azure subscription.

Customizing the Build

Create a new build definition and configure it to build the solution. Be sure to add the “Publish” target to your build. The argument will cause the Azure 1.7 SDK to create the deployment package during build.

If you are using TFS Build, you can do this while configuring your build: look for the field ‘MSBuild Arguments’ and set the value ‘/t:Publish’.

Finish configuring the build definition and queue the build. We are going to use the output from the build to test the PowerShell script.

Testing the PowerShell script

Using the output from the build, we should be able to execute the PowerShell script. Copy the script over to your build server. Execute it with the path to the publish file, the location where the output package and configuration file can be found, and the name of the package file in that location. A tag can optionally be specified to help identify the build. I usually use the version number of the binary here, but the build label works just as well.

Note: I’ve had some problems with the Azure PowerShell cmdlets and relative paths.
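One way to sidestep relative-path issues is to expand paths before handing them to the cmdlets. A sketch, with a hypothetical package name:

```powershell
# Resolve-Path expands a relative path into an absolute one
$packagePath = (Resolve-Path '.\MyProject.cspkg').Path
```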

If your machine is configured correctly, this should deploy without any problems. If you are using TFS the next section is useful for you to wrap this all together. If you are using a build system other than TFS, you already have what you need to continue. Enjoy!

TFS Build Process Template

The Build Process Template included in the package takes a few arguments and handles the execution of the PowerShell script rather nicely. After choosing it as the template for your build, you just have to specify a few arguments.

Deployment Configuration: The build configuration of the Azure project, “Debug” or “Release”
Deployment Profile: The name of the profile file to be used
Deployment Project Name: The name of the Azure project
Deployment Script: (Optional) In case you don’t keep your deployment script in the same spot as mine

The build process template uses the build label as a tag for the build. It might be a bit more useful to use something like martinbuberl/VersionTasks on GitHub to tag the builds.

This is part one in a series of blog articles in which I shed light on the internals of MSIs using the example of the MSI for Far Manager 3. It was inspired, amongst other things, by lessons learned while creating the Far-2 and Far-3 packages for Chocolatey. While the idea of writing those packages was so others would not have to learn the dark arts of command-line manipulation of MSIs, I thought I would write this series for those interested in how the sausage gets made.

I’ve written about my love of the File and Archive Manager before. I’m a command-line guy, so this utility naturally appeals to me. In addition to being a command-line guy, I’m also very much an MSI guy. I always prefer installers to unzipping files somewhere, and I prefer MSI installers to exe-based installers such as those made with the Nullsoft Scriptable Install System. On the surface it might seem like these two loves would be in conflict. After all, when you run an MSI you get a GUI. However, there is a command-line executable for installing MSIs, called MSIExec.

So using the example of a recent 64-bit nightly build of Far Manager 3.0, C:\users\zippy\Downloads\Far30b2746.x64.20120624.msi, let’s explore how we can install and uninstall an MSI from the command line.

Getting some help with msiexec /?

MSIExec comes with built-in help. To see it, type msiexec /? from the command line. Strangely, it will display that help in a window instead of the console. This is similar to the behavior of ntbackup.

Let’s install Far Manager!

Looking through the command-line install options, /i is the switch to install an MSI. To have no GUI feedback I can use /quiet. If I want just a status bar I can use /passive. So to install Far automatically, I can use the following command:
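Using the nightly-build MSI from earlier, the command might look like this (with /passive so a progress bar is shown; substitute /quiet for no UI at all):

```powershell
msiexec /i C:\users\zippy\Downloads\Far30b2746.x64.20120624.msi /passive
```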

This will give you a default install of Far Manager 3.0. However, if you use Far, you are by definition a power user, and you probably don’t pick the default install options. It is possible to customize what features to install during a command line install. I’ll explain your options for doing that in part 2.

Time to uninstall it

I can only see one reason to want to uninstall Far Manager 3.0. That would be, of course, when Far Manager 4.0 comes out! Since Far 3.0 is still under development, that event is probably at least 2 years away. However, we want to be prepared for that day. Also, Far is just the example we are using. There are other programs that are installed with an MSI that deserve to be uninstalled.

If you examine the msiexec /? documentation, you will see that the uninstall syntax is msiexec </uninstall | /x> <Product.msi | ProductCode>. The /uninstall and /x switches are identical, but I prefer /x because it’s terser. ProductCode is a GUID that I’ll explain how to get later. For now, let’s use the path of the original MSI. So in our example the command is:
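Using the same MSI path as the install example, the uninstall command would look something like:

```powershell
msiexec /x C:\users\zippy\Downloads\Far30b2746.x64.20120624.msi /passive
```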

That’s fairly simple. However, you don’t always have the MSI available for something you want to uninstall.

Finding the Product Code

The ProductCode is a GUID. The method for finding it is not obvious. I’ve resorted to using WMI, but there are probably other methods. Specifically, I use the Win32_Product class. The best way to do this from the command line is with the PowerShell cmdlet Get-WmiObject. The command to search for all instances of Far Manager installed via MSI on your system is:
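A sketch of that query (the exact filter pattern is an assumption; adjust the name to taste):

```powershell
# List MSI-installed products whose name starts with "Far Manager"
Get-WmiObject -Class Win32_Product -Filter "Name LIKE 'Far Manager%'" |
    Select-Object Name, Version, IdentifyingNumber
```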

As you can see, I have versions 2.0 and 3.0 of Far installed. I could uninstall Far 2.0 with the command msiexec /x {143F0C11-D9F3-4F1E-9037-67BBFDD379AD} /passive. If you want to get fancy and uninstall both at the same time, you can with the help of the Start-Process cmdlet.
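A sketch of that batch uninstall, waiting for each msiexec to finish before starting the next (Win32_Product exposes the product code as IdentifyingNumber):

```powershell
# Uninstall every MSI-installed Far Manager, one at a time
Get-WmiObject -Class Win32_Product -Filter "Name LIKE 'Far Manager%'" |
    ForEach-Object {
        Start-Process msiexec -ArgumentList "/x $($_.IdentifyingNumber) /passive" -Wait
    }
```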

The take-home of this example was simple: using the @{} operator by itself creates a System.Collections.Hashtable object, and the order of its keys is not guaranteed, which in turn means the order of the properties of the resulting PSObject is not guaranteed. However, we could use [ordered] to make it an “ordered hashtable” (Shay’s words).
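The difference can be sketched like this (it requires PowerShell 3.0, where [ordered] was introduced):

```powershell
# A plain hashtable makes no guarantee about key order...
$plain = @{ First = 1; Second = 2; Third = 3 }

# ...while [ordered] produces an ordered dictionary that preserves insertion order
$ordered = [ordered]@{ First = 1; Second = 2; Third = 3 }
New-Object PSObject -Property $ordered
```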

So I spent some time looking for the MSDN documentation for OrderedHashtable. I never found it so I decided to fire up my Windows 8 VM and see what its type is.

So I said to myself, “OK, that’s cool, but can I cast from an OrderedDictionary to a MongoDB.Bson.BsonDocument with the MongoDB .NET driver?” I ask because a while back I submitted some patches to improve the driver’s user experience in PowerShell. My main goal was to be able to use the hashtable notation to define a BsonDocument like so:
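That is, something like the following (a sketch; it assumes the driver assemblies have already been loaded, and the field names are made up):

```powershell
# Cast a hashtable straight to a BsonDocument
$doc = [MongoDB.Bson.BsonDocument] @{ FirstName = 'Justin'; LastName = 'Dearing' }
```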

PowerShell lets you create web service proxies from WSDLs via the New-WebServiceProxy cmdlet. However, it only works for SOAP web services running on HTTP endpoints. If you have a WCF service using only non-HTTP protocols, such as NetTcp, you cannot use New-WebServiceProxy.

Now, I’ve created and consumed my fair share of web services. So when I began to use PowerShell, I quickly figured out this unfortunate fact. I always knew that I could make use of the WCF API to generate the proxies, but I never bothered to figure out how. The annoyance was pretty academic to me. At some point I even told someone on Stack Overflow that it can’t be done.

Using my version

My version of the script creates three functions:

Get-WsdlImporter

Get-WcfProxyType

Get-WcfProxy

The function that is most analogous to New-WebServiceProxy is Get-WcfProxy. I kept the name from Christian’s version of the code, despite the fact that New-WcfProxy would be more appropriate. In Christian’s version of the code, Get-WcfProxy only returned a System.Type of the generated proxy, not an instance of it. I renamed that to Get-WcfProxyType. Finally, Get-WsdlImporter takes a WSDL or MEX endpoint and returns an instance of System.ServiceModel.Description.WsdlImporter that represents that metadata.

By default Get-WsdlImporter tries to generate the WsdlImporter via metadata exchange, but it can parse a WSDL with the -HttpGet switch. Below are some examples illustrating its usage:
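For instance (the endpoint URLs are hypothetical; Get-WsdlImporter is one of the three functions from the script):

```powershell
# Default: generate the importer via metadata exchange
$importer = Get-WsdlImporter 'net.tcp://localhost:8000/MyService/mex'

# Alternative: parse an HTTP-hosted WSDL directly
$importer = Get-WsdlImporter 'http://localhost:8000/MyService?wsdl' -HttpGet
```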

I don’t see much point in calling Get-WcfProxyType directly, so I will not illustrate how to use it here. Get-WcfProxy is the function you want to call. Its main parameter is either a URL or a WsdlImporter. You can either let Get-WcfProxy pick the first endpoint it finds in the WsdlImporter, or specify the endpoint and URL you want to use as parameters, as illustrated below:
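A sketch of the simplest invocation (the service URL and method name are hypothetical):

```powershell
# Let Get-WcfProxy discover the first endpoint itself
$proxy = Get-WcfProxy 'net.tcp://localhost:8000/MyService/mex'

# Call an operation on the generated proxy
$proxy.GetData(42)
```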

Today I was asked to do something that seemed simple, until I actually had to do it. A coworker had a database with two fields he wanted renamed in a specific way. For our example, let’s call them ProductNumber and ProductName. He wanted ProductNumber to be sequential (1, 2, 3 . . .) and the ProductName fields to be called “Product A”, “Product B” . . . “Product Z”, “Product AA” etc. So this suddenly became a non-trivial problem if you had more than 26 rows, which of course I did.

So I rolled up my sleeves, got a fresh cup of coffee, and got to work. Populating ProductNumber was easy enough using a Common Table Expression (CTE) with ROW_NUMBER(). Then I realized I could think of the English alphabet as symbols for a base 26 number system, with AA following Z and so on. The only problem was I couldn’t express that in a set-based way for a clean T-SQL implementation. No problem, I’d just generate the T-SQL to make a giant mapping table in PowerShell!

I am ashamed to admit I had to look up the algorithm for converting from base 10 to another base. I was also surprised to discover that the first result Google returned me was this Tripod page.

The algorithm is as follows.

Start with an empty string which becomes the return value

While the value is greater than the base, get the remainder of the value divided by the base. Convert that to its letter and prepend it to the return value.

Repeat step 2 with the quotient of the value over the base.

When the quotient is less than the base, prepend that to the string instead.

It seemed simple enough, but there were some headaches.

The first thing I discovered was that when you divide integers in PowerShell, you get a float as a result. Also, casting the result back to an int rounds instead of truncating. I was expecting the opposite in both cases, because that is how C# behaves. I ended up using the unwieldy combination of [math]::Floor() and a cast, in the form [int][math]::Floor($currVal / 26), to resolve this. A TechNet article recommends the even more unwieldy [Math]::Floor([int] $currVal / [int] 26), but I verified that my terser method gives the same results.

Then I had problems with how to display powers of 26. The way it was supposed to work was that 1 = A, 24 = X, 25 = Y, 26 = Z and 27 = AA. However, depending on how I did it I ended up with 26 = AZ or 27 = BA. I could not account for this edge case, nor compensate for it with special conditions.

Then it dawned on me: A needed to be equal to zero, not one. A base 10 system deals with the digits 0-9. Base 2 deals with 0 and 1. Base 16 deals with 0-F, and F is 15. Once I rewrote my script to work that way, the edge cases disappeared, and things just worked.

If it’s not clear how I generated uppercase letters: the ASCII codes for A through Z are 65 through 90, and casting an integer to [char] converts it to the corresponding character. Ergo, the expression [char]65 evaluates to “A”.
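Putting the zero-based realization together with [math]::Floor and the [char] cast, the core conversion can be sketched as follows (my actual script generated T-SQL around this; the function name is made up). The decrement inside the loop is what treats A as zero within each digit:

```powershell
function ConvertTo-AlphaColumn {
    param([int]$Value)  # 1 -> A, 26 -> Z, 27 -> AA, and so on

    $result = ''
    while ($Value -gt 0) {
        $Value--                                 # treat A as zero within each digit
        $remainder = $Value % 26
        # 65 is the ASCII code for A; prepend the letter for this digit
        $result = [string][char](65 + $remainder) + $result
        $Value = [int][math]::Floor($Value / 26)
    }
    $result
}
```

For example, ConvertTo-AlphaColumn 28 yields “AB”.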

It’s an all too familiar story. You need to reboot a server, and then you need to start a remote desktop connection into it. So what do you do? You open up a command prompt, type ping -t <HOSTNAME>, and wait until the server responds to pings. When that happens, you keep trying to connect via remote desktop until it works. There’s got to be an easier way. You should write a PowerShell script to automate the process. However, it’s one of those things that’s not quite annoying enough to get you to actually take action and write the script. Luckily, thanks to the power of Twitter, I reached a tipping point this week and wrote the script. It all started out with some innocent whining:

@zippy1981 (Justin Dearing): “You know what I need in an RDP client. I need an ‘I just rebooted so ping it for me and autoreconnect’.”

Then d0tk0m and Yanni Robel retweeted my whining. They say necessity is the mother of invention. In this case, the commiseration of two tweeps was the father. So I spent a Saturday with PowerGUI and came up with a script to solve the problem.

Planning stage

I wanted my script to automate what I already did. From the perspective of a system administrator who wants to reboot a server and then remote desktop into it, the following happens.

All the processes, including the remote desktop service (termsrv.dll), are shut down. When that happens, the remote desktop port (default 3389) no longer has anything listening on it.

The server will finish shutting down, the BIOS will POST, and Windows will begin booting

Eventually the network adapter will come up and start responding to pings.

The remote desktop service will start and bind to the remote desktop port.

Therefore, I made my script do the following.

Ping the server until it answers five successive ping requests. Yes, this might be naive and optimistic in many cases; however, it worked in my use case. I used the Send() method of the .NET System.Net.NetworkInformation.Ping class to do this.

Once I was sure the host was up, I’d try to connect to it on port 3389, or another port if I passed a different port as a parameter. To do this I used the System.Net.Sockets.TcpClient class.

After this it was simply a matter of passing the right parameters to the remote desktop client, mstsc.exe. I initially attempted to use the simple & mstsc <Arguments>. However, that didn’t work too well, so I ended up resorting to Invoke-Expression.
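The whole plan can be sketched roughly as follows (parameter defaults mirror the description above; the real script is more robust):

```powershell
param(
    [string]$ComputerName,
    [int]$Port = 3389
)

# Step 1: wait for five successive successful pings
$ping = New-Object System.Net.NetworkInformation.Ping
$successes = 0
while ($successes -lt 5) {
    try {
        if ($ping.Send($ComputerName).Status -eq 'Success') { $successes++ }
        else { $successes = 0 }
    } catch { $successes = 0 }  # e.g. name resolution failures while the host is down
    Start-Sleep -Seconds 1
}

# Step 2: wait for the remote desktop port to accept TCP connections
$connected = $false
while (-not $connected) {
    $client = New-Object System.Net.Sockets.TcpClient
    try {
        $client.Connect($ComputerName, $Port)
        $connected = $client.Connected
    } catch { Start-Sleep -Seconds 1 }
    finally { $client.Close() }
}

# Step 3: launch the remote desktop client
Invoke-Expression "mstsc.exe /v:${ComputerName}:$Port"
```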

The script

Below is the current version of the script, hosted on poshcode.org. The current version is also stored in a gist repository. While you are free to post changes to poshcode.org (or anywhere) in accordance with the license, I’d prefer that you notify me of any changes so they can be placed in the gist git repo.

Readers of this blog know that I’ve been using MongoDB for a while, and I’ve recently become very excited about PowerShell. Well, recently I’ve been able to combine the two for pure dynamically typed, schema-less, non-relational awesomeness. Such awesomeness is begging to be shared.

Since the C# driver MSI is 32-bit, it creates its registry entries in the Wow6432Node. Therefore, we have to check whether we are running in the 32- or 64-bit version of PowerShell. Credit to an anonymous commenter on the msgoodies blog for providing this size-of-a-pointer trick to determine if you are running 32-bit or 64-bit.
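The trick itself is a one-liner: IntPtr is 4 bytes in a 32-bit process and 8 bytes in a 64-bit one.

```powershell
# $true when running in 64-bit PowerShell, $false in the 32-bit version
$is64Bit = [IntPtr]::Size -eq 8
```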

The next thing we want to do is to create a BSON document. This is surprisingly easy.

As you can see, PowerShell can convert a hashtable to a BsonDocument. This is because of the public constructor BsonDocument(IDictionary hashTable). PowerShell can use these one-parameter constructors to cast an object. You can use the same hashtable trick for the QueryDocument and UpdateDocument classes.
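For example (a sketch; it assumes the driver assemblies have already been loaded with Add-Type, and the field names are made up):

```powershell
# Each cast works because the target class has an IDictionary constructor
$doc    = [MongoDB.Bson.BsonDocument]     @{ FirstName = 'Justin'; LastName = 'Dearing' }
$query  = [MongoDB.Driver.QueryDocument]  @{ LastName = 'Dearing' }
$update = [MongoDB.Driver.UpdateDocument] @{ LastName = 'Smith' }
```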

Now that we have our BsonDocument, it’s time to perform basic CRUD operations.
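A rough sketch of those operations against the legacy (1.x) driver API, with hypothetical server, database, and collection names:

```powershell
# Connect and grab a collection (assumes mongod is running locally)
$server     = [MongoDB.Driver.MongoServer]::Create('mongodb://localhost')
$database   = $server.GetDatabase('test')
$collection = $database.GetCollection('people')

# Create
$collection.Insert([MongoDB.Bson.BsonDocument] @{ FirstName = 'Justin' })

# Read
$justin = $collection.FindOne([MongoDB.Driver.QueryDocument] @{ FirstName = 'Justin' })

# Update
$collection.Update(
    [MongoDB.Driver.QueryDocument]  @{ FirstName = 'Justin' },
    [MongoDB.Driver.UpdateDocument] @{ FirstName = 'Justin'; LastName = 'Dearing' })

# Delete
$collection.Remove([MongoDB.Driver.QueryDocument] @{ FirstName = 'Justin' })
```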

As you can see, it’s not very hard to use the 10gen MongoDB C# driver from within PowerShell. Using PowerShell with the MongoDB C# driver has many possibilities, first of all ad hoc MongoDB queries from inside of PowerShell. The code for this example is available in its entirety here.