It’s an all-too-familiar story. You need to reboot a server and then start a remote desktop session on it. So what do you do? You open a command prompt, type ping -t <HOSTNAME>, and wait until the server responds to pings. When that happens, you keep trying to connect via remote desktop until it works. There’s got to be an easier way: you should write a PowerShell script to automate the process. However, it’s one of those things that’s not quite annoying enough to get you to actually take action and write the script. Luckily, thanks to the power of Twitter, I reached a tipping point this week and wrote the script. It all started out with some innocent whining:

@zippy1981 (Justin Dearing): “You know what I need in an RDP client. I need an ‘I just rebooted so ping it for me and autoreconnect.’”

Then d0tk0m and Yanni Robel retweeted my whining. They say necessity is the mother of invention. In this case, the commiseration of two tweeps was the father. So I spent a Saturday with PowerGUI and came up with a script to solve the problem.

Planning stage

I wanted my script to automate what I already did. From the perspective of a system administrator who wants to reboot a server and then remote desktop into it, the following happens.

All the processes, including the remote desktop service (termsrv.dll), are shut down. When that happens, the remote desktop port (3389 by default) no longer has anything listening on it.

The server will finish shutting down, the BIOS will POST, and Windows will begin booting.

Eventually the network adapter will come up and start responding to pings.

The remote desktop service will start and bind to the remote desktop port.

Therefore, I made my script to do the following.

Ping the server until it answered five successive ping requests. Yes, this might be naive and optimistic in many cases; however, it worked in my use case. I used the Send() method of the .NET System.Net.NetworkInformation.Ping class to do this.

Once I was sure the host was up, I’d try to connect to it on port 3389, or another port if I passed a different port as a parameter. To do this I used the System.Net.Sockets.TcpClient class.

After this, it was simply a matter of passing the right parameters to the remote desktop client, mstsc.exe. I initially attempted to use the simple & mstsc <Arguments>. However, that didn’t work too well, so I ended up resorting to Invoke-Expression.
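The three steps above can be sketched roughly like this (the host name, port, and retry timings are placeholders; the real script on poshcode.org is more robust):

```powershell
param($ComputerName = 'myserver', $Port = 3389)

# Step 1: wait for five successive successful pings
$ping = New-Object System.Net.NetworkInformation.Ping
$successes = 0
while ($successes -lt 5) {
    $reply = $ping.Send($ComputerName)
    if ($reply.Status -eq 'Success') { $successes++ } else { $successes = 0 }
    Start-Sleep -Seconds 1
}

# Step 2: wait until something is listening on the remote desktop port
$connected = $false
while (-not $connected) {
    $tcp = New-Object System.Net.Sockets.TcpClient
    try     { $tcp.Connect($ComputerName, $Port); $connected = $true }
    catch   { Start-Sleep -Seconds 2 }
    finally { $tcp.Close() }
}

# Step 3: launch the remote desktop client
Invoke-Expression "mstsc /v:${ComputerName}:$Port"
```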

The script

Below is the current version of the script, hosted on poshcode.org. The current version is also stored in a gist repository. While you are free to post changes to poshcode.org (or anywhere), in accordance with the license, I’d prefer that you notify me of any changes so that they can be placed in the gist repo.

Readers of this blog probably think I have an obsession with editing my system path. That belief is absolutely correct. I even added a tag on this blog for the articles about path manipulation. I am a command-line junkie who is constantly trying out new tools, so I have to add them to my path. I’ve written about doing this from PowerShell here and here, as well as doing it with setx. While these methods are good, I wanted something better. I found it in pathed.exe.

pathed.exe is a program that lets you edit both your user and system paths. It only manipulates the path, not other environment variables. The reason for this extreme specialization is that pathed is specifically designed for appending to and removing from the path. It treats the path as a semicolon-delimited array, which of course is what it is. For example, I just ran it on my machine as I was writing this article (note: live coding is less embarrassing when you do it on a blog).

If you notice, there happen to be two copies of the path to Mercurial on my path. Well, let’s fix that right now:

For me, the reference implementation of mixed emotions is the combination of anger, relief, joy, and frustration when “Why isn’t there a way to do X?!” becomes “How come no one ever told me about Y?!” This past Friday, I got to experience that entire bag of emotions thanks to setx.exe.

Setx (technet – ss64) is a command-line utility that sets environment variables permanently on Windows. This behavior is distinct from the set command (technet – ss64), which only affects the current cmd.exe session. To clarify, there are three levels of environment variables:

Machine level: all users on a given machine see these.

User level: each individual user on a system has a set of these variables.

Session level: when you spawn a cmd.exe process, it gets a set of transient variables for that session.

Now, until I knew about setx, I had two ways of setting environment variables permanently. The first was to go through several layers of the Windows GUI. The second method, preferable to me, was to use PowerShell, as I illustrate elsewhere on this blog. However, that method requires a lot of keystrokes or some aliasing. Setx, however, simplifies the syntax quite nicely.
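As a sketch, appending a folder to your path with setx looks like this (C:\tools is a placeholder; note the caveat in the comments):

```powershell
# Append C:\tools to the user-level path. Caution: $Env:PATH expands
# to the combined machine + user path, so this copies machine entries
# into the user-level variable as a side effect.
setx PATH "$Env:PATH;C:\tools"

# The /M switch targets the machine-level path instead
# (requires an elevated prompt).
setx PATH "$Env:PATH;C:\tools" /M
```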

One thing to note about setx, as per ss64: it is available on Windows 7 and through the resource kits. If your Windows installation does not have setx.exe, try installing a resource kit.

One of the cool things about IIS 7.0 and WCF is the ability to serve WCF endpoints with non-http bindings. Naturally, this new feature presents new opportunities for the developer to get frustrated by WCF configuration headaches. This blog post is about one of them.

I was writing a WCF web service that had three endpoints: JSON, SOAP 1.1, and net.tcp. This service’s primary purpose was to be the middleware for the Mongo database where my application’s data was held. In the end, I didn’t need the net.tcp endpoint, making this exercise a complete waste of time. The reasons for my architectural decisions for this app are the subject of another blog post. For now, let’s just say that if you are supposed to learn more from your failures than your successes, I should get an honorary doctorate for this app.

In the past I’ve mixed SOAP and net.tcp in EXE-hosted WCF services with great success. However, since I’ve yet to find a JSONP endpoint solution for WCF that allows request parameters, I needed to host this in IIS on the same site as my project. So I made a .svc file in my website, added the service DLL as a reference, and ran it. I promptly got the following error:

It was a bit frustrating, but I did eventually find the solution. In IIS Manager you have to select Advanced Settings on the application’s folder (or on the website itself, if the app is running at the root). In the Advanced Settings dialog is an option called Enabled Protocols. It probably contains the value http or http,https. You simply have to append ,net.tcp to the current value.
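If you’d rather script the change than click through IIS Manager, appcmd can set the same property (the site and application names here are placeholders):

```powershell
# Append net.tcp to the enabled protocols of the application
& "$Env:windir\System32\inetsrv\appcmd.exe" set app "Default Web Site/MyApp" /enabledProtocols:"http,net.tcp"
```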

After that, everything works.

As an epilogue to this adventure, since I forgot to take all the screenshots needed for this blog article at work, I ended up having to make an example project to reproduce the error at home. I took an older example WCF project I wrote called EchoService and added a website host to it in addition to the EXE host. This improved version of EchoService can be found at the justaprogrammer GitHub org. Feel free to use this service as the basis for any WCF-related instructional materials. The code is licensed under the very permissive MIT license.

I was tempted not to post this article, since it is little more than a link to someone else’s blog article. However, said article is so useful it’s worth sharing in a manner more permanent than a tweet.

A while back I was trying to get some PHP code to run on a Red Hat Enterprise Linux server. Long story short: the code required PHP 5.3, RHEL does not package PHP 5.3, I didn’t feel like compiling PHP, and it was late Friday afternoon. So I googled around and discovered that someone had made a yum repo of PHP 5.3 binaries. For once things worked magically and I went home at a reasonable hour. Such things rarely happen to me, so it is worth noting.

I will add a final postscript. If you look at the blog article, you will realize that this repo has been maintained for a while. The repo originally contained PHP 5.2.10 and now contains PHP 5.3.3. Therefore, it seems a safe bet that this repo will continue to be updated.

A while back, I demonstrated some PowerShell one-liner-fu for path management. I also pointed out that my machine had duplicate entries in its path. Most people would not care about this at all. Most of the few people who do care would clean up their path by hand. However, there are a mentally deranged few who realize the world needs a PowerShell script to clean up our paths for us. Luckily for you, I am that kind of crazy. I don’t stop there, though: I also show you how to search the registry to add items to your path.

Warning: I don’t know much about PowerShell, and I’ve not tested this script nearly enough. This thing messes with your system path, and things can go really bad if it messes your path up. Be very careful running it. If you run it, you’re the fool who follows a fool, which makes you Han Solo, not Ben Kenobi. However, you’re shooting your %PATH%, not Greedo.

The script makes use of multi-dimensional arrays and foreach loops. I’d be lying if I said that it was efficiently written or bug-free. However, it works and does a few clever things with PowerShell.

Notes about this script

Your path is a combination of two environment variables: the machine %PATH% and the user %PATH%. My script only concerns itself with the machine-level path. Therefore I don’t use $Env:PATH, like I do in my one-liner example, to get the original path. Instead I use [Environment]::GetEnvironmentVariable('PATH', 'Machine').

Another thing I do is trim trailing slashes off the end of each path entry. This means C:\windows\ becomes C:\windows. This may seem excessive, but it allows me to remove duplicates that only differ because one has a trailing slash and one lacks it.

Two other caveats exist with regard to comparison. One is differences in case: since Windows paths are case-insensitive, the comparisons need to be case-insensitive too. The other is environment variable expansion. On Windows, if you stick %SYSTEMROOT%\system32 in your path, the OS will expand that to C:\windows\system32. My script does not handle this at the moment. Hopefully it will in the future.
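A stripped-down sketch of the deduplication described in these notes (not the actual script) looks something like this:

```powershell
# Read only the machine-level path
$machinePath = [Environment]::GetEnvironmentVariable('PATH', 'Machine')

# Trim trailing slashes so C:\windows\ and C:\windows compare equal,
# drop empty entries, and remove duplicates case-insensitively
$folders = $machinePath.Split(';') |
    ForEach-Object { $_.TrimEnd('\') } |
    Where-Object { $_ } |
    Sort-Object -Unique

[Environment]::SetEnvironmentVariable('PATH', ($folders -join ';'), 'Machine')
```

Note that Sort-Object reorders the entries, which changes the search order of your path; a more careful implementation would remove duplicates while preserving the original order.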

I have some very simple registry detection to check if a few programs are installed. This will probably remain a hard-coded list of programs and the registry entries that determine their installation paths. I’m hoping to also add detection through Windows services. This would be good for something like MongoDB, which lacks an installer but can install itself as a Windows service.

Finally, I’d like to point out that the script is UTF-16 encoded. This is simply because I wrote the script in PowerShell ISE, which decided to save it in UTF-16 format. Rather than change it to UTF-8 so git could easily diff it, I decided to fix git. A Stack Overflow question provided the guidance I needed.
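The fix amounts to a textconv filter, along these lines (assuming an iconv binary is available, as it is with msysgit):

```powershell
# In .gitattributes, mark the script as UTF-16:
#   *.ps1 diff=utf16

# Then tell git how to convert UTF-16 to UTF-8 before diffing:
git config diff.utf16.textconv "iconv -f utf-16 -t utf-8"
```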

Conclusion

Once again, I’d like to reiterate that you must be careful when using this script. There might be some serious bugs, and messing with your path can lead to bad things happening. Also, while I will probably blog about updates to the script, the fastest way to keep track of its changes is to look at the GitHub commit notes and the source code itself. Finally, if you find a bug in my script or make an improvement, patches will be accepted and due credit given.

Update: an older blog article exists on Chris Conway’s blog. The directions are out of date, but it’s an interesting read for a historical perspective on the improvements in mongod’s Windows support.

Unix is an OS built around a “worse is better” philosophy. Part of that philosophy is defining things through convention. This has many advantages. One is that it’s really easy to write a program that can run both in the console and as a daemon.

The Mongo server, mongod, is a perfect example of this. If you run mongod in the console, it spews all its output to stdout. This is great for development and testing. However, if you want to run mongod all the time, it’s very simple to run it from an init script.

On the Windows side of things, it’s more complicated. In Windows, daemons run as services (except, apparently, if you are using an Azure instance). Services operate separately from interactive processes. Actually, that’s not entirely true: you can have a service that interacts with the Windows GUI if you want to. As I said, it’s complicated.

In order for a Windows program to operate as a service, it has to make certain API calls. It’s actually not that hard in its most basic form, and there are well-established patterns for doing it.

Furthermore, there is a wrapper program in the Windows Resource Kit, srvany.exe, that will allow you to turn almost any console program into a Windows service. However, it is not an ideal solution. Luckily, a programmer by the name of Alan Wright added proper Windows service support to mongod. It was a well-implemented service wrapper, and I have made good use of it. I have also contributed some modifications that 10gen graciously accepted into their repo. The result is a really clean but powerful service implementation built into mongod. I shall now demonstrate the power of this fully armed and operational Death... I mean, demonstrate the power of mongod’s Windows service support.

Before we Begin

First, you are going to want to download the latest stable version of Mongo. As of this writing that is Mongo 1.6.3. Since Mongo is evolving so rapidly, some of the more advanced features related to Windows service support, not covered in this article, are only available in the unstable 1.7 series. Things move fast on the bleeding edge.

Second, you want to make sure you have mongod installed in a sane location. My definition of a sane location is pretty much any place outside of C:\Documents and Settings or C:\Users. This also means on a hard drive permanently attached to your system. There’s nothing wrong with running mongod off an external hard drive if it’s always plugged in. Just keep in mind you won’t be able to unplug it while Mongo is running, and the service will fail to start if you boot your system without the drive plugged in.

Third, you want to be able to run mongod from a command prompt using the same switches you wish the service to use. Please note that you are required to use logging when running mongod as a Windows service. In my case I will run mongod like this:
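The command itself, minus the service switches that come later, is the following (the paths are my choices, not requirements):

```shell
mongod --auth --logpath c:\data\log\mongo.log --logappend --bind_ip localhost
```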

What I am doing here is appending to the log instead of overwriting it every time I start Mongo, and only listening for local connections. If mongod is running on the same machine as your web server, this is a good idea. I am also running with authentication.

Finally, you want to make sure your command prompt has administrative access to your computer. Mongod will not raise a UAC prompt and elevate its own privileges (however, there’s a ticket for that). So make sure you do all the following steps from a command prompt running as an administrator. Also, note that in a future article I will talk about running mongod as a service as an unprivileged user.

And now we install the service

So you’ve worked out your particular command-line options, made your data and log folders, etc. Double-check your Mongo log to make sure there are no errors. Now we are ready to install mongod as a service. To do this we simply append --install to the command line. So our install command looks like:

C:\Program Files\Microsoft SDKs\Windows\v7.1>mongod --auth --logpath c:\data\log\mongo.log --logappend --bind_ip localhost --install
all output going to: c:\data\log\mongo.log
Creating service MongoDB.
Service creation successful.
Service can be started from the command line via 'net start "MongoDB"'.

Now let’s say you want to change the parameters. For example, you decide to run without authentication. If mongod is already installed as a service and you want to change the command-line parameters, then you have to use --reinstall instead of --install. So let’s try that now:

C:\Program Files\Microsoft SDKs\Windows\v7.1>mongod --logpath c:\data\log\mongo.log --logappend --bind_ip localhost --reinstall
all output going to: c:\data\log\mongo.log
Deleting service MongoDB.
Service deleted successfully.
Creating service MongoDB.
Service creation successful.
Service can be started from the command line via 'net start "MongoDB"'.

So, as you can see, --reinstall removes the service and then installs it again. Pretty self-explanatory.

OK, and finally we want to clean up. To remove mongod as a service, we use --remove. The output:

You will note that this command seems extra verbose. This is because messages that would normally be sent to the log are being sent to stdout.

Starting and Stopping the Service

There are several ways to start and stop a Windows service. You can of course use the Service Control Manager (SCM). However, since we are on the command line already, we might as well use that. The command to start our Mongo service is net start mongodb. Likewise, net stop mongodb stops our service. The service is configured to start automatically at boot time, which is probably what you want. If not, you can tune this behavior in the SCM.

Further Directions

This is only the tip of the mongod-as-a-Windows-service iceberg. More options are available, and I will discuss them in depth in future articles.

To say I’ve fallen in love with PowerShell is an understatement. PowerShell is what Perl would be if Perl were object-based instead of stream-based and lacked all the “culture” of Perl. I use PowerShell for a lot of things lately. Recently, I’ve been using it to manage my path, thanks to this PowerShell tip of the week. This has inspired me to share two one-liners related to path management.

The first allows you to add an item to your path. Few things annoy me more than Windows programs that don’t include installers, and few installers annoy me more than those that install console programs and don’t offer to update your system path. Correcting this used to be a matter of going through nested levels of dialog windows. However, I can now do it from the PowerShell console. The command to append to your path is as follows:
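The one-liner is along these lines, with C:\tools standing in for the folder you want to add:

```powershell
# Append a folder to the machine-level path permanently
[Environment]::SetEnvironmentVariable('PATH', [Environment]::GetEnvironmentVariable('PATH', 'Machine') + ';C:\tools', 'Machine')
```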

“Machine” makes this environment variable apply to the entire machine, as opposed to the current user.

So there you have it, instead of digging three levels deep into menus to edit your path, you simply dig up this blog article and copy and paste that one line into PowerShell 🙂

The next one-liner determines what’s in your path. Simply typing $Env:PATH will print out your path in the same manner as the cmd.exe command path does. However, you probably want something more human-readable. How about if each folder on your path were on its own line, and the paths were sorted? Well, that’s quite easy:
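Here it is; each piece is explained below:

```powershell
# Print the path one folder per line, sorted, with duplicates removed
$Env:PATH.Replace('"', '').Split(';') | Sort-Object -Unique
```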

.Replace() is String.Replace(). Some of your path folders might be quoted; removing the quotes causes them to be sorted properly.

.Split(';') is String.Split(). We want to turn our path into an array of strings, one folder per string.

| is a pipe. This works like cmd.exe and Unix piping, except it’s object-based, not stream-based.

Sort-Object -Unique does exactly what you think it does: it sorts the folders in your path and removes duplicates. Sometimes paths contain a folder multiple times; cleaning up the duplicates will be addressed in a future blog post.

This one-liner is good to run when you inherit someone else’s workstation, or a server set up by someone else. One limitation is that it does not expand environment variables. Ideally I’d like it to expand %SystemRoot% to C:\Windows. However, Sort-Object is case-insensitive by default, like the NTFS file system, so casing issues are not a problem.

This is not quite a PowerShell first-impressions article. I’ve toyed with PowerShell a few times before. Most notably, I toyed with the PowerShell TFS cmdlets that come with the Team Foundation Server Power Toys a little under a year ago. However, I never stuck with it long enough to retain any of the syntax. Recently, I discovered a one-liner by chance on Buck Woody’s blog. This led me to do some serious PowerShell tinkering today. I’m not quite a seasoned PowerShell novice, but I believe I am now on my way there.

So here is the script in question:

Get-EventLog System | Where-Object { $_.EntryType -eq "Error" }

It is a simple one-liner to get all the errors in your event log. The main workhorse is the Get-EventLog cmdlet. I spent a good chunk of time playing with it and came up with a few iterations:

Get-EventLog -LogName System -EntryType 'Error' # Filter in the cmdlet instead of piping to Where-Object
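A couple more iterations restrict the query by date (parameter names as in PowerShell v2’s Get-EventLog):

```powershell
# Only errors from the last 24 hours, filtered by the cmdlet itself
Get-EventLog -LogName System -EntryType Error -After (Get-Date).AddDays(-1)

# The same filter done the slow way, piping every entry through Where-Object
Get-EventLog -LogName System |
    Where-Object { $_.EntryType -eq 'Error' -and $_.TimeGenerated -gt (Get-Date).AddDays(-1) }
```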

I tried several other permutations, most of which did nothing. I will make note of one thing here, which I will delve into more in a follow-up post: the pipe (“|”) character. It works almost exactly as any Windows or Unix command-line guru would expect. Namely, it “pipes” the output of the program on the left into the input of the program on the right. However, PowerShell is object-based, unlike Unix, DOS, and Windows shells, which are based on streams of text. Therefore, you can pipe objects as well as strings with PowerShell. When piping to and from cmdlets, you are piping objects. The examples above that use the Where-Object cmdlet make the implications of this clear.

One thing to note here is the speed of return. Searching the whole event log is expected when using the Where-Object cmdlet, since every log entry is queried and piped to the next command. However, one would think the event log is indexed by date, and that Get-EventLog would be written so that only a subset of the entries has to be traversed when you specify the -After parameter. Yet when you run both examples, the command “hangs” for a similar while between the last row of output and the prompt returning.

In my next article we will throw grep and less into the mix, and see what happens when we mix object piping with text piping.

I have a tolerate/hate relationship with Solaris. I’ve played with it occasionally, and I always seem to spend too much time administering Solaris, leaving me less time to solve the problem I intended to use Solaris for.

Recently, I was called in to troubleshoot some mod_perl scripts for a client. The troubleshooting was done over email, and I never actually had access to the machine the code ran on. I was asked to do some follow-up, and I wanted to reproduce the client’s environment more accurately. So I asked for the specific OS and Perl version. As you might have guessed from the title, the Perl code was running on Solaris 10.

So I fired up my trusty Dell Studio XPS, googled around, and found that Sun^H^H^HOracle makes a Solaris VirtualBox appliance. I already use VirtualBox on my laptop, so this saved me some time. Installation was simple, but a few things annoyed me about the process.

Virtual Box Guest Additions

I had to manually install the VirtualBox Guest Additions. The main advantage of this package is that it allows your guest OS to run in full-screen mode, or be resized to an arbitrary resolution. Oracle owns all the code for both VirtualBox and the guest OS. There is no reason, legal or otherwise, for them not to distribute their Solaris VM appliance with the Guest Additions already installed.

Security Defaults

My biggest beef here is that the Solaris installer asks you for a root password but does not allow you to make a non-privileged local user. I prefer the “no direct root login” model of OS X and Ubuntu, where all administration is done through sudo. However, Solaris does not ship with sudo installed. Instead, Solaris has something called RBAC that is superior to sudo. I look forward to learning more about it if I am forced to deal with Solaris again.

Lack of /root

In Solaris, root’s home directory is / as opposed to /root. This means there is a /Desktop and other such folders in the root of the file system. In retrospect, I could have avoided their creation by doing a console login as root after installing the appliance and making a non-privileged user to log into X with. However, those steps could be avoided with one more install screen for adding users to the system.

Desktop Experience

There was some desktop software preinstalled, most notably Star Office 8, Mozilla 1.7, and Firefox 2.0.0.19. Star Office 8 is the previous version of what is now Oracle Open Office, the “value added” closed-source version of OpenOffice. I can understand being a version behind with Star Office, since it was probably the latest version when Solaris 10 was released. Firefox 2 was probably new when Solaris first came out as well, but I don’t see the need for the old Mozilla suite. There was some good news, though: Flash was preinstalled.

I went to YouTube to test Flash and discovered that audio did not work. The internet told me there was no VirtualBox sound driver for Solaris and that I should install the Open Sound System. The audio isn’t perfect, but it works. While most people are not running Solaris as a desktop, it seems odd to distribute a VM appliance with a browser and Flash and not include sound support. Hopefully Oracle will develop its own sound driver, or distribute the Open Sound System one, in the future.

Package Management

Solaris comes with a package management system, and a decent amount of software is installed by default. However, certain things Linux users would expect, like vim, are not included. As a FreeBSD person, I’ve come to expect my OS to come with a vi that is not vim. Luckily, there is a comprehensive third-party repository for Solaris at blastwave.org. I followed the directions to install pkgutil, and soon I was using vim.
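Once pkgutil is bootstrapped per Blastwave’s directions, installing vim is one command (the /opt/csw prefix is Blastwave’s default install location):

```shell
# Install vim and its dependencies from the Blastwave repository
/opt/csw/bin/pkgutil --install vim
```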

Service Management

Historically, Solaris has had an rc.d system that is, in my humble opinion, weirder than the Linux and BSD ones. In its defense, it better captures the Unix “worse is better” Zen simplicity than the Linux and BSD init scripts do. However, apparently all good things must be deprecated in the name of progress. While some services still exist in /etc/rc[0-5].d/, many were moved to the Solaris Service Management Facility (SMF) in Solaris 10. These services are administered with svcadm and svcs. While I was tweaking the Apache config, I became intimately familiar with these commands. Overall I’m quite happy with this innovation; however, I need to develop more competency with it.
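As an example of the sort of thing I was doing, restarting Apache under SMF goes roughly like this (the exact FMRI varies by release; `svcs -a` lists yours):

```shell
# Find the Apache service instance and check its state
svcs -a | grep apache

# Enable it (starts it now and at boot), then restart it after editing the config
svcadm enable svc:/network/http:apache2
svcadm restart svc:/network/http:apache2
```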

Conclusion

Solaris has improved from years past in terms of administrative experience, but it is still as weird as ever. That weirdness has grown on me over the years, though. I acknowledge that my conclusion that Solaris is weird is based on the bias that FreeBSD is “normal.”