January 19, 2008

Command line tool vs. PowerShell Cmdlet

I’ve blogged before about the fact that I like command line tools. Not quick and dirty no-options hacks, but seriously crafted applications. That must be due to the fact that I started in the DOS era. Console applications are powerful, expressive, and above all they can be used to automate tasks. And yet, things have changed. Batch files lack a good deal of features, the more powerful shells known from UNIX environments never caught on in the Microsoft world, and neither did the Windows Script Host (WSH). Nevertheless there are tools that stepped into the breach, if only for me. ANT, intended as a successor to MAKE, and NAnt, today about to be succeeded(?) by MSBuild, provided the scripting features I was looking for. Those tools however lack universal availability and support, and have to fall back on command line utilities for special tasks. But now we have PowerShell, which attempts not only to bring unprecedented scripting support to Windows, but also (by Microsoft’s commitment) to be the one shell that is universally supported by server software for the automation of administrative tasks.

As Bob Muglia (Microsoft’s Senior Vice President, Windows Server) said at the last PDC: “We are going to undergo a project over the next few years to get a full set of Monad [PowerShell] commands across all of Windows Server, and across all of our server applications. […]“.

So it lingered in my mind to check the differences between command line tool development and PowerShell cmdlet development, until an MSDN Magazine article ("Extend Windows PowerShell With Custom Commands") gave the final push. My little command line framework was aimed at making 80% of the demands easy and effortless. Would PowerShell complicate matters? Would it provide additional features I had missed so far? Would it be feasible to provide both implementations at once? All that was needed was a real-world test scenario….

I change the screen resolution of my laptop quite often, depending on whether it’s balanced on my knees or placed on the desk. I never found a command line tool to do this, so I always had to go through Display Settings. With Vista this became a nuisance (more clicks, another window to close, …). So this became my test scenario: A tool to query and change the screen resolution.

Preparation

Since I intended to use the logic in different contexts, I separated it into a dedicated core assembly right from the start. www.pinvoke.net told me the details about changing the screen resolution with .NET, and MSDN provided the background. Thus I had the .NET representations of DEVMODE, ChangeDisplaySettings, and EnumDisplaySettings, as well as a little helper class for standard tasks (like error handling and getting all display settings as a list).
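For illustration, the P/Invoke layer looks roughly like this. This is a sketch following the declarations on www.pinvoke.net; the DEVMODE struct is abbreviated (the trailing ICM fields are omitted), so see there for the complete layout:

```csharp
using System.Runtime.InteropServices;

// Abbreviated DEVMODE marshaling; trailing ICM fields are omitted
// (see www.pinvoke.net for the complete definition).
[StructLayout(LayoutKind.Sequential, CharSet = CharSet.Ansi)]
public struct DEVMODE
{
    [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 32)]
    public string dmDeviceName;
    public short dmSpecVersion, dmDriverVersion, dmSize, dmDriverExtra;
    public int dmFields;
    public int dmPositionX, dmPositionY;
    public int dmDisplayOrientation, dmDisplayFixedOutput;
    public short dmColor, dmDuplex, dmYResolution, dmTTOption, dmCollate;
    [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 32)]
    public string dmFormName;
    public short dmLogPixels;
    public int dmBitsPerPel;
    public int dmPelsWidth;       // horizontal resolution in pixels
    public int dmPelsHeight;      // vertical resolution in pixels
    public int dmDisplayFlags;
    public int dmDisplayFrequency;
}

public static class NativeMethods
{
    public const int ENUM_CURRENT_SETTINGS = -1;  // query the active mode
    public const int DISP_CHANGE_SUCCESSFUL = 0;
    public const int CDS_UPDATEREGISTRY = 1;      // make the change persistent

    [DllImport("user32.dll")]
    public static extern int EnumDisplaySettings(
        string deviceName, int modeNum, ref DEVMODE devMode);

    [DllImport("user32.dll")]
    public static extern int ChangeDisplaySettings(
        ref DEVMODE devMode, int flags);
}
```

Before calling EnumDisplaySettings, dmSize must be set to Marshal.SizeOf(typeof(DEVMODE)); passing ENUM_CURRENT_SETTINGS retrieves the active mode, while non-negative indices enumerate all supported modes.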

Since I am familiar with it and it defined the reference, I started with the command line version. I’ll call out the important points and provide code snippets only where necessary. For a quick peek at the console framework usage see my older post; the full source code is available for download at the end.

Command Line Tool

The following screen shot shows the calls to the command line (CL) tool:

Using my framework the implementation was a snap. (It ought to be; that’s what I wrote it for.):

Provide an application class derived from ConsoleApp.

Override ApplyArguments to get the values for setting the resolution.

Override ApplySwitch to handle /query and /queryall.

Override Process to do the work, i.e. switch on the actual job to do and call the respective helper methods in the core implementation.

Provide methods to output the current screen resolution and the list.

Provide a resource file with logo and help information.
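The steps above boil down to a skeleton roughly like the following. To be clear, this is a sketch only: the ConsoleApp base class and the override signatures are assumed for illustration, the real framework API is the one described in the older post, and the helper names are placeholders:

```csharp
// Sketch only: ConsoleApp, the override signatures, and the helper names
// are assumptions for illustration; the real API is in the framework.
class DisplaySettingsApp : ConsoleApp
{
    int _width, _height;
    bool _query, _queryAll;

    // pick up positional arguments: width and height
    protected override void ApplyArguments(string[] args)
    {
        if (args.Length >= 2)
        {
            _width = int.Parse(args[0]);
            _height = int.Parse(args[1]);
        }
    }

    // handle /query and /queryall
    protected override void ApplySwitch(string name)
    {
        switch (name.ToLowerInvariant())
        {
            case "query": _query = true; break;
            case "queryall": _queryAll = true; break;
        }
    }

    // switch on the actual job and delegate to the core assembly
    protected override int Process()
    {
        if (_queryAll) PrintAllSettings();        // placeholder output helpers
        else if (_query) PrintCurrentSetting();
        else DisplayHelper.SetResolution(_width, _height);
        return 0;
    }
}
```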

Quick and easy, it does the job, and it satisfies my "quality demands".

PowerShell

PowerShell (PS) differs in its philosophy insofar as it demands a separate cmdlet per command, in my case one for setting the resolution, another for querying it. Below is the screen shot for loading the snapin and querying the current resolution:

Here is what I had to do to implement the functionality for PS:

Provide two different cmdlets, derived from PSCmdlet, to declare the parameters and to implement the commands. Quite easy for get, a bit more to do for set.

Provide a format file for the output. Reasonable work and quite flexible.

Provide a help file. Ugly work.

Provide a snapin class derived from PSSnapIn to satisfy the PS infrastructure and run installutil to announce the snapin. No big deal at all.
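Registration and loading then boil down to something like the following session; note that the snapin and cmdlet names here are hypothetical placeholders, not necessarily the ones used in the download:

```powershell
# Register the snapin assembly with PowerShell (run once, elevated);
# installutil lives in the .NET framework directory.
installutil DisplaySettings.PowerShell.dll

# In a PowerShell session: load the snapin, then use its cmdlets.
Add-PSSnapin DisplaySettingsSnapIn            # snapin name is hypothetical
Get-ScreenResolution                          # current resolution
Get-ScreenResolution -All                     # list all available modes
Set-ScreenResolution -Width 1280 -Height 800 -WhatIf
```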

Apart from being installed, the snapin needs to be loaded into the shell before it can be used, as shown in the screen shot above. Here are my assessments of the individual tasks:

get Cmdlet and Format File

This is the sweet spot with one small caveat. Rather than printing information like the command line tool, my cmdlet should follow the PS philosophy and produce objects that will then be printed (but might also be piped to the next cmdlet within a script). The DEVMODE struct did not fit too well with that demand, but a simple wrapper class with just the properties needed did the trick.

After that, all it took was one simple class containing a property (the only pitfall being its data type, SwitchParameter) to switch between current and all resolutions, plus a ProcessRecord method that could not be simpler:

[Parameter(HelpMessage =
    "provide this parameter to show a complete list of available screen settings; " +
    "otherwise only the current screen setting is reported.")]
public SwitchParameter All
{
    get { return _all; }
    set { _all = (bool)value; }
}

While this would have already been enough (note that I didn’t have to do anything to print the data), adding a format file allowed customization of the output format even using script blocks. Very flexible.
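For reference, a table view in such a format file comes down to the following shape; the type and property names are those of my hypothetical wrapper class:

```xml
<Configuration>
  <ViewDefinitions>
    <View>
      <Name>ScreenSettingTable</Name>
      <ViewSelectedBy>
        <!-- full name of the pipeline wrapper class (hypothetical) -->
        <TypeName>DisplaySettings.ScreenSetting</TypeName>
      </ViewSelectedBy>
      <TableControl>
        <TableHeaders>
          <TableColumnHeader><Label>Width</Label></TableColumnHeader>
          <TableColumnHeader><Label>Height</Label></TableColumnHeader>
          <TableColumnHeader><Label>Colors</Label></TableColumnHeader>
        </TableHeaders>
        <TableRowEntries>
          <TableRowEntry>
            <TableColumnItems>
              <TableColumnItem><PropertyName>Width</PropertyName></TableColumnItem>
              <TableColumnItem><PropertyName>Height</PropertyName></TableColumnItem>
              <!-- a script block can compute derived values -->
              <TableColumnItem><ScriptBlock>[math]::Pow(2, $_.BitsPerPel)</ScriptBlock></TableColumnItem>
            </TableColumnItems>
          </TableRowEntry>
        </TableRowEntries>
      </TableControl>
    </View>
  </ViewDefinitions>
</Configuration>
```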

Writing this cmdlet was even more straightforward than writing the CL tool. The latter focuses on keeping similar tasks (like parsing and processing) in one place, while the cmdlet keeps related parts together and does a lot more in a declarative fashion.

set Cmdlet

Compared to the get cmdlet, the set cmdlet adds some processing demands. Providing the input parameters is again done using properties adorned with attributes. Validation beyond the provided attributes would require either supporting this infrastructure (more effort than with the CL tool) or plain code in BeginProcessing, which splits the validation across two places. This is especially the case with interdependent property values.

Now for the processing part: While my command line tool simply set the screen resolution, PowerShell Cmdlet Development Guidelines demand that system changing operations follow certain rules: They should support the WhatIf, Confirm, and Force properties. This includes calling ShouldProcess and ShouldContinue within ProcessRecord.

While this adds additional "burden" on the cmdlet developer, it also ensures a consistent user interface for cmdlets. Not a bad thing as long as cmdlet developers consistently follow these guidelines.
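The pattern looks roughly like this. The cmdlet and parameter names are my hypothetical examples, but SupportsShouldProcess and the ShouldProcess/ShouldContinue calls are the ones demanded by the guidelines:

```csharp
using System.Management.Automation;

// Hypothetical set cmdlet sketch; names are placeholders, the
// WhatIf/Confirm/Force pattern follows the cmdlet guidelines.
[Cmdlet(VerbsCommon.Set, "ScreenResolution", SupportsShouldProcess = true)]
public class SetScreenResolutionCommand : PSCmdlet
{
    [Parameter(Mandatory = true)] public int Width { get; set; }
    [Parameter(Mandatory = true)] public int Height { get; set; }
    [Parameter] public SwitchParameter Force { get; set; }

    protected override void ProcessRecord()
    {
        string target = string.Format("{0}x{1}", Width, Height);

        // -WhatIf and -Confirm are evaluated inside ShouldProcess.
        if (!ShouldProcess(target, "Set screen resolution"))
            return;

        // For a risky change, ask again unless -Force was given.
        if (!Force.IsPresent &&
            !ShouldContinue("Really switch to " + target + "?",
                            "Set-ScreenResolution"))
            return;

        // ... call into the core assembly to apply the change ...
    }
}
```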

Help Support

The help file (XML, of course) has been the area of the biggest hurdles. It’s complex, highly redundant (with some parts not even used, according to the documentation), with an opaque mixture of leveraged schemas, attributes without the expected default values, and so on. Writing consistent help files is IMO not feasible in all but the simplest examples; rather, help files scream for tool support (hear me, anybody?).

In this case I craved the simplicity of my command line tool. I also have to say that this complexity is IMO not justified just for having a help command print out some information, and the documentation does not explain the motivation for what I can only see as over-engineering. Only if you think of generating help files similar to the MSDN documentation could I understand the reason. Then again, I could imagine a help browser with a GUI interface, but such a thing has not been mentioned anywhere, much less announced or implemented (AFAIK).

Snapin and Registration

The snapin is simple enough and registering the cmdlets with PS prior to using them is a reasonable demand.

Final Verdict

So, final verdict. Hmm, let’s see… CL tool vs. PS cmdlet:

Code organization: The CL tool addresses multiple commands and keeps similar functionality (command line parsing, processing, etc.) together. PS uses one cmdlet per command and keeps the command related stuff together. Different approaches but no technical advantages in general for either; it’s just a matter of taste. 0:0

Command line parsing: Cmdlets follow a purely declarative approach that goes further than what I’ve shown here (e.g. parameter sets), while the CL tool uses an imperative approach. For simple cases the cmdlet needs less coding, but I’ll still give this point to the command line tool: it’s more flexible and consistent when it comes to parameter validation. With cmdlets, at some point the declarative approach will fail, e.g. with parameter interdependencies. 1:0

Processing: It was more work to implement the cmdlet logic in the case of the set cmdlet. But this also brought about additional features and compliance with certain patterns. More work but not a bad thing. 1:1

Pipeline support: No question, here the cmdlets play in the top league. Just for the price of a clean pipeline object (which is negligible) I get flexible output formatting, object-oriented pipelining, etc. The CL tool cannot compete here, as I have to code the output and still only get "simple output". 1:2

Help support: While the CL help is no doubt quite simple, it is sufficient for my needs. Cmdlet help on the other hand aims to answer more complex needs — by becoming complex^2, addressing unclear demands, and being highly redundant. Thus cmdlets lose by comparison as well as by simple inspection. 2:2

If this were a shootout, it would end close to an even score. But it was not; it was about whether PowerShell cmdlets are a viable alternative to command line tools, and whether dual implementations are a feasible option. Both are the case, and since this has been accomplished with plain PowerShell features, whereas the command line tool needed a framework, PowerShell actually did very well.

All in all, I’m content. My console framework did well ;-) and even if PowerShell becomes the future, there is a clean migration path. I’m ready for the future :-)


Hi, I haven’t used PowerShell extensively, but I’ve looked at it. The learning curve seems to be steep in the beginning(?)

Two questions: If you are building apps on .NET, MSBuild is mandatory, because VS and – more importantly – Team Build use it. So we need to put that syntax into our tiny little brains (that is sort of mandatory, and it is perfectly suited for writing little development/deployment related scripts). When I want to extend Team Build, it will be a decision between “can I do this with an MSBuild task”, “can I use very simple cmd.exe-like scripting, possibly using psexec.exe”, or “do I have to use PowerShell (which isn’t in my brain yet)”. And we are devs – we also need to know XSLT, probably XQuery (esp. SQL Server), maybe WiX for deployment. And as far as I am aware, the PowerShell syntax differs strongly from MSBuild and from all of the above.

I simply don’t understand why the syntaxes differ that much (well, different product group, I know; possibly PowerShell is targeted at admins more than devs).

Second thing is that PowerShell seems over-engineered to me. I mean, get-childitem for listing directories? I always thought IT guys love it concise? I know there are aliases for that, but… When I looked at it for an hour or so it made my brain melt, more than any of the other things noted.

@Brian: Good points, and to some degree I share your concerns. However, I think that doesn’t help either of us :-)

Yes, the learning curve is steep, because there is so much new to learn. PS scripting is more closely related to the UNIX shells than to .cmd or WSH, and it adds some additional concepts like objects on the pipeline. And – as always – new technologies add to the stack of need-to-know stuff, which seems to grow endlessly.

Two points though:

1. PS _is_ targeted at administrative tasks. That means we, as developers, are probably not the primary user group to _use_ PS. But as developers we are the ones to _support_ those usage scenarios, as we may have to write the PS cmdlets that allow administration of our (server) applications. (On the other hand, a build script that not only compiles but also automatically sets up the web server with the latest build should profit from this.)

2. IMO get-childitem _is_ very concise (i.e. accurate and consistent). It is actually well engineered in that it clearly specifies the very simple verb-noun syntax used to call any command. If that is too much to type, there are aliases to shortcut it. The complexity that no doubt is there at first comes from two facts: one, everything you already know how to do in other systems you have to look up anew in PS; two, there are so many objects available, far more than we had before, that the sheer mass takes time to get used to.

All in all I don’t think that PS will replace MSBuild or any other development tool; but I don’t think that we as developers can afford to ignore PS for long either, once the demand grows (which it will if MS is true to its word). And if a PS script relieves me of the burden of writing MSI packages, I will happily embrace it ;-)