One of the great things about doing Office automation (that is, COM automation of Office apps) is that all of the examples are filled with tons of references to constants. A goal of VisioBot3000 was to make using those constants as easy as possible.

I mentioned the issue of having so many constants to deal with in a post over 18 months ago, but haven’t ever gotten around to showing how VisioBot3000 gives you access to some (most?) of the Visio constants.

First, here’s a snippet of code from that post:

$connector.CellsSRC(1,23,10) = 16
$connector.CellsSRC(1,23,19) = 1

which includes the following constants (except that it uses the values rather than the names):

visSectionObject = 1
visRowShapeLayout = 23
visSLORouteStyle = 10
visLORouteCenterToCenter = 16
visSLOLineRouteExt = 19
visLORouteExtStraight = 1

VisioBot3000 Constants Take 1

So, the straightforward thing to do would be to define a variable for each of the constants like this:
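Using the constant values listed above, the assignments (and the rewritten snippet) would look something like this:

```powershell
# Define a variable for each constant (values from the Visio type library)
$visSectionObject         = 1
$visRowShapeLayout        = 23
$visSLORouteStyle         = 10
$visLORouteCenterToCenter = 16
$visSLOLineRouteExt       = 19
$visLORouteExtStraight    = 1

# The cryptic snippet from the earlier post becomes:
$connector.CellsSRC($visSectionObject,$visRowShapeLayout,$visSLORouteStyle) = $visLORouteCenterToCenter
$connector.CellsSRC($visSectionObject,$visRowShapeLayout,$visSLOLineRouteExt) = $visLORouteExtStraight
```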

That looks a lot less cryptic, or at least the constant names go a long way towards helping explain what’s going on.

Even better, if we had all of the Visio constants defined this way it would make translating example code (and recorded macro code) a lot easier.

Wait…there’s a problem.

There are over 2000 constants defined. Guess how long it takes PowerShell to parse and execute over 2000 assignment statements? Too long, as in several seconds.

VisioBot3000 Constants Take 2

So, my original approach was to take a list of Visio constants from a web page which doesn’t exist anymore (it was on PowerShell.com, which now redirects to an Idera site). Some friendly PowerShell person had gone to the trouble of creating PowerShell assignment statements for each of the Visio constants, at least up to a point. I ended up adding a couple of dozen, but that’s a drop in the bucket. There were a whopping 2637 assignment statements.

When I added that code to the VisioBot3000 module, importing the module took a noticeable amount of time. Another fun issue is that we had just added 2637 variables to the session, which caused some fun with tab completion (think $vis….waiting…waiting).

Since that wasn’t a really good solution, I thought about what would be better.

My first thought was an enumeration (enum in PowerShell). I thought briefly about lots of enumerations (one for each “set” of constants), but quickly discarded that idea as unusable.

A single enumeration would be an ok solution, but it would mean that VisioBot3000 only worked on PowerShell 5.0 or above. No dice (at least at this point).

I decided instead to create a single object called $Vis that had each constant as a property.

To do that, I needed a hashtable with 2637 members. I ended up using an ordered hashtable, but that doesn’t really change much.

The code in question is in the VisioConstants.ps1 file in VisioBot3000. It looks something like this:
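The real file defines all 2637 entries; here’s an abridged sketch (the property names shown are illustrative):

```powershell
# VisioConstants.ps1 (abridged) - one big hashtable literal instead of
# thousands of individual assignment statements
$VIS = [Ordered]@{
    SectionObject         = 1
    RowShapeLayout        = 23
    SLORouteStyle         = 10
    LORouteCenterToCenter = 16
    SLOLineRouteExt       = 19
    LORouteExtStraight    = 1
    # ...2600+ more entries...
}

# Expose a single object whose properties are the constants
$Vis = New-Object -TypeName PSObject -Property $VIS
```

With that in place, the snippet reads `$connector.CellsSRC($Vis.SectionObject, $Vis.RowShapeLayout, $Vis.SLORouteStyle) = $Vis.LORouteCenterToCenter`.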

It turns out that the code runs quite a bit faster than the individual assignments and now there’s only one variable exposed. Intellisense is a bit slow to load the first time you do $Vis., but it’s pretty quick after that.

As the first installment in this series, I want to go back to the topic I wrote on in my very first blog post back in 2009. In that post, I talked about why PowerShell (1.0) was something that I was interested enough in to start blogging.

Many of the points I mentioned there are still relevant, so I’ll repeat them now. Here are some of the things that made PowerShell awesome to me in 2009:

Ability to work with multiple technologies in a seamless fashion (.NET, WMI, AD, COM)

Great variety of delivered cmdlets, even greater variety of community cmdlets and scripts

On a similar note, a fantastic community that shares results and research

Extensible type system

Everything is an object

Powerful (free) tools like PowerGUI, PSCX, PowerShell WMI Explorer, PowerTab, PrimalForms Community Edition, and many, many more. (ok…I don’t use any of these anymore)

Easy embedding in .NET apps including custom hosts.

The most stable, well-thought-out version 1.0 product I’ve ever seen Microsoft produce.

An extremely involved, encouraging community.

Of those things, the only ones that aren’t very relevant are the “free tools” (those tools aren’t relevant, but there are a lot of other new, free ones), and the 1.0 comment.

Since it’s been almost 11 years now, instead of talking about 1.0, let’s talk about now.

Microsoft has placed PowerShell at the focus of its automation strategy. Instead of being a powerful tool with a passionate community, it is now a central tool behind nearly everything that is managed on the Windows platform. And given the imminent release of PowerShell Core, it will soon be (officially) available on OSX and Linux to provide some cross-platform functionality for those who want it. In 2009 you could leverage PowerShell to get more stuff done. Now, in 2017, you can’t get much done without touching PowerShell.

Finally, PowerShell is a part of so many solutions now, including most (all?) of the management UIs and APIs coming out of Microsoft in the last several years. Microsoft is relying on PowerShell to be a significant part of their products. Other companies are doing the same, delivering PowerShell modules along with their products. They do this because it is a proven system for powerful automation.

Why PowerShell? Because it’s awesome.

Why PowerShell? Because it’s everywhere.

Why PowerShell? Because it’s proven.

And my final point, which hasn’t changed since I talked about it in 2009 is that PowerShell is fun!

Are you looking to start your PowerShell learning journey? Maybe you have already started and are looking for pointers. Perhaps you’ve got quite a bit of experience and you just want to fill in some gaps.

I recently found myself translating some C# code into PowerShell. If you’ve done this, you know that most of it is really routine. Change the order of some things, change the operators, drop the semicolons.

In a few places you have to do some adjusting, like changing using scopes into try/finally with .Dispose() in the finally.
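For example, a C# using block translates to something like this ($path is a placeholder):

```powershell
# C#:  using (var reader = new StreamReader(path)) { ... }
# PowerShell translation: try/finally with an explicit Dispose
$reader = New-Object System.IO.StreamReader -ArgumentList $path
try {
    $firstLine = $reader.ReadLine()
}
finally {
    $reader.Dispose()
}
```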

But all of that is pretty straightforward.

Then I ran into a method that wasn’t showing up in the tab-completion. I hit the dot, and it wasn’t in the list.

I had found…an extension method!

Extension Methods

In C# (and other managed languages, I guess), an extension method is a static method of a class whose first parameter is declared with the keyword this.
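Since PowerShell doesn’t surface extension methods as instance members, the workaround is to call the static method directly, passing the target object as the first argument. A sketch using the well-known Enumerable.Where extension method (the scriptblock-to-delegate cast works in PowerShell 3.0 and later):

```powershell
$numbers = [int[]](1, 2, 3, 4, 5)

# C#:  numbers.Where(n => n > 3)
# PowerShell: call the static method, passing $numbers as the "this" argument
[System.Linq.Enumerable]::Where($numbers, [Func[int,bool]]{ param($n) $n -gt 3 })
```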

In my last post I showed several instances of the syntax help that you get when you use get-help or -? with a cmdlet.

For instance:
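For a hypothetical function with four untyped parameters (I’ll call it Test-Params), the syntax section might look like this:

```powershell
Test-Params [[-parm1] <Object>] [-parm2 <Object>] [-parm3 <Object>] [-parm4 <Object>] [<CommonParameters>]
```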

This help is showing how the different parameters can be used when calling the cmdlet.

If you’ve never paid any attention to these, the notation can be difficult to work out. Fortunately, it’s not that hard. There are only a handful of different possibilities. In the following, I will be referring to a parameter called Foo, of type [Bar].

An optional parameter that can be used by position or name:

[[-Foo] <Bar>]

An optional parameter that can only be used by name:

[-Foo <Bar>]

A required parameter that can be used by position or name:

[-Foo] <Bar>

A required parameter that can only be used by name:

-Foo <Bar>

A switch parameter (switches are always optional and can only be used by name):

[-Foo]
[-Foo <Switchparameter>] #odd, but you may see this in the help sometimes

So, in the example above we see that we have

parm1, which is a parameter of type Object (i.e. no type specified), is optional and can be used by name or position

parm2, which is a parameter of type Object, is optional and can only be used by name

parm3, which is a parameter of type Object, is optional and can only be used by name

parm4, which is a parameter of type Object, is optional and can only be used by name

With some practice, you will be reading more complex syntax examples like a pro.

Positional Parameters

Whether you know it or not, if you’ve used PowerShell, you’ve used positional parameters. In the following command the argument (c:\temp) is passed to the -Path parameter by position.

cd c:\temp

The other option for passing a parameter would be to pass it by name like this:

cd -path c:\temp

It makes sense for some commands to allow you to pass things by position rather than by name, especially in cases where there would be little confusion if the names of the parameters are left out (as in this example).

In this parameter declaration, we’ve explicitly assigned positions to the first four parameters, in order.
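The declaration in question looked something like this (function and parameter names are placeholders):

```powershell
function Test-Params {
    [CmdletBinding()]
    param(
        [Parameter(Position=0)] $parm1,
        [Parameter(Position=1)] $parm2,
        [Parameter(Position=2)] $parm3,
        [Parameter(Position=3)] $parm4
    )
}
```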

Why is that confusing? Well, by default, all parameters are available by position and the default order is the order the parameters are defined. So assigning the Position like this makes no difference (or sense, for that matter).

It gets worse!

Even worse than being completely unnecessary, I would argue that specifying positions like this is a bad practice.

One “best practice” in PowerShell is that you should (almost) always use named parameters. The reason is simple. It makes your intention clear. You intend to bind these arguments (values) to these specific parameters.

By specifying positions for all four parameters (or not specifying any) you’re encouraging the user of your cmdlet to write code that goes against best practice.

What should I do?

According to the help (about_Functions_CmdletBindingAttribute), you should use the PositionalBinding optional argument to the CmdletBinding() attribute, and set it to $false. That will cause all parameters to default to not be allowed by position. Then, you can specify the Position for any (hopefully only one or two) parameters you wish to be used by position.
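A sketch of what that looks like, keeping only the first parameter positional (names are placeholders):

```powershell
function Test-Params {
    [CmdletBinding(PositionalBinding=$false)]
    param(
        # Explicit Position, so this one can still be used positionally
        [Parameter(Position=0)] $parm1,
        $parm2,
        $parm3,
        $parm4
    )
}
```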

Looking at the help for this function we see that this is true:
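Assuming the function is named Test-Params and the parameters are untyped, the syntax line reads something like:

```powershell
Test-Params [[-parm1] <Object>] [-parm2 <Object>] [-parm3 <Object>] [-parm4 <Object>] [<CommonParameters>]
```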
Because parm1 is in brackets ([-parm1]) we know that that parameter name can be omitted. The other parameter names are not bracketed (although the entire parameters/arguments are), so they are only available by name.

But wait, it gets easier

Even though the help says that all parameters are positional by default, it turns out that using Position on one parameter means that you have to use it on any parameters you want to be accessed by position.

For instance, in this version of the function I haven’t specified PositionalBinding=$False in the CmdletBinding attribute, but only the first parameter is available by position.
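Something like this (again with placeholder names), where only parm1 gets a Position:

```powershell
function Test-Params {
    [CmdletBinding()]   # note: no PositionalBinding=$false
    param(
        [Parameter(Position=0)] $parm1,
        $parm2,
        $parm3,
        $parm4
    )
}
```

The resulting syntax help brackets only [-parm1], even though PositionalBinding was never set to $false.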

That’s interesting to me, as it seems to contradict what’s in the help. Specifically, the help says that all parameters are positional. It then says that in order to disable this default, you should use the PositionalBinding parameter. This shows that you don’t need to do that, unless you don’t want any positional parameters.

As a final example, just to make sure we understand how the Position value is used, consider the following function and syntax help:
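Something like this, with positions assigned out of declaration order (names are placeholders):

```powershell
function Test-Params {
    [CmdletBinding()]
    param(
        $parm1,
        [Parameter(Position=1)] $parm2,
        $parm3,
        [Parameter(Position=0)] $parm4
    )
}
```

The syntax help then lists the positional parameters in position order, something like: Test-Params [[-parm4] <Object>] [[-parm2] <Object>] [-parm1 <Object>] [-parm3 <Object>].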

By including Position on 2 of the parameters, we’ve ensured that the other two parameters are only available by name. Also, the assigned positions differ from the order that the parameters are defined in the function, and that is reflected in the syntax help.

I don’t think about parameter position a lot, but to write “professional” cmdlets, it is one of the things to consider.

I’ve been using PowerShell for about 10 years now. Some might think that 10 years makes me an expert. I know that it really means I have more opportunities to learn. One thing that has occurred to me in the last 4 or 5 months is that I’ve been missing the point with PowerShell error handling.

PowerShell Error Handling 101

First, PowerShell has try/catch/finally, like most imperative languages have had for the last 15 years or so. At first glance, there’s not much to see. I usually give an example that looks something like this:
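A sketch of that example (the computer name is deliberately bogus):

```powershell
try {
    Get-WmiObject -Class Win32_BIOS -ComputerName 'NoSuchComputer'
}
catch {
    # The "nice message" that people expect to see
    Write-Output "Something went wrong: $_"
}
```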

Most people are surprised to see red error text on the screen and the nice message nowhere to be found.

Anyone with much experience with PowerShell knows that some (most?) PowerShell cmdlets output error records (not exceptions) in some cases, and that try/catch doesn’t “catch” these error records. In PowerShell parlance, exceptions are terminating errors, and error records are non-terminating errors.

My explanation for why the PowerShell team created non-terminating errors is this:
Imagine you managed a farm of 1000 computers. What would be the odds of all 1000 of them responding correctly to a get-wmiobject call? If anyone in the class optimistically says anything other than “slim to none”, up the number to 10,000 and repeat.

With standard “programming semantics” (i.e. exceptions, terminating errors), a call to 1000 computers which failed on any of them would immediately throw an exception and leave the try block. At that point, all positive results are lost.

As a datacenter manager, is that how you want your automation engine to work? I don’t think so.

With non-terminating errors, the correct results are returned from the cmdlet and error records are output to the error stream. The error stream can be inspected to see what went wrong, and you still get the output.

Where I missed the point

What I’ve been teaching (and I’m not alone) is that the solution is to use the -ErrorAction common parameter to cause the non-terminating error to be a terminating error. That means that we can use try/catch, but it also means that we need to introduce a loop.
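The pattern I had been teaching looks roughly like this (with $computers holding the list of target machines):

```powershell
$results = @()
foreach ($computer in $computers) {
    try {
        # -ErrorAction Stop promotes the non-terminating error so catch sees it
        $results += Get-WmiObject -Class Win32_OperatingSystem -ComputerName $computer -ErrorAction Stop
    }
    catch {
        Write-Warning "Failed on $computer : $_"
    }
}
```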

An aside

If you ever find yourself writing code that sidesteps something that the PowerShell team put in place, you should take a step back and see if you’re doing the right thing. The PowerShell team is really, really smart, so if you’re working around them, you probably missed the point (like I did).

Why this is missing the point

One thing that people often miss about PowerShell cmdlets is how often they let you pass lists as arguments. The -ComputerName parameter is one such place. By passing a list of computers to Get-WMIObject, you let PowerShell execute the command against all of those computers “at the same time”. There is overhead, and it’s not multi-threaded, but since most of the work is being done on other machines, you really do get a huge performance increase. It might take five times longer to hit 100 machines than a single machine, but it won’t be anything like 100 times slower.

By introducing a loop, we’ve guaranteed that the time it takes will be at least 100 times as long, because each cmdlet execution is being done in sequence. Using an array (or list) as the argument would allow most of the work to be done more or less in parallel.

That’s not to mention the fact that now we’ve taken on the responsibility of adding the individual results into a collection. Not a big deal, but anywhere you write more code is a place to have more bugs.

So what’s the right way to do this?

In my opinion, a much better way to do this kind of activity would be to continue to pass the list, but use the -ErrorAction and -ErrorVariable parameters in conjunction to get the best of both worlds. It would look something like this:
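A sketch, again with $computers holding the list of target machines:

```powershell
try {
    $results = Get-WmiObject -Class Win32_OperatingSystem -ComputerName $computers -ErrorAction SilentlyContinue -ErrorVariable Problems
}
catch {
    # Only a genuine terminating error (e.g., out of memory) lands here
    Write-Warning "Unexpected terminating error: $_"
}

# Each entry in $Problems is an ErrorRecord from one failed machine
foreach ($problem in $Problems) {
    Write-Warning $problem.Exception.Message
}
```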

With this construction, we’re only calling get-wmiobject once, so we get the speed of parallel execution. By using -ErrorAction SilentlyContinue, we won’t have any error records (non-terminating errors) written to the error stream. That means, no red text in our output. By the way, SilentlyContinue will write the error records to the $Error automatic variable. If you don’t want that, you can use -ErrorAction Ignore instead.

The “key” to making this technique work is -ErrorVariable Problems. This collects all of the non-terminating errors output by the command, and puts them in the variable $Problems (remember to leave the $ off when using -ErrorVariable). Since I have those in a variable, I can loop through them after I get the results and do whatever I need to with them.

Finally (no pun intended), I put the cmdlet call in a try/catch in case it throws an exception (for instance, out of memory).

So, to summarize, I get the speed of only calling the cmdlet once, and I also get to do something with the errors on an individual basis.

I’m sure someone in the community is teaching this pattern, but I don’t remember seeing it.

What do you think?

–Mike

Why WMI instead of CIM?

I use Get-WMIObject (even though WMI cmdlets are deprecated) for a couple of reasons. First, my work environment doesn’t have WinRM enabled on our laptops by default. I teach error handling before remoting, so at this point using CIM cmdlets with -ComputerName causes errors that are even harder to explain. Also, my first memorable exposure to non-terminating errors was with WMI cmdlets.

One unfortunate problem with using WMI cmdlets is that the error records that it emits do not contain the offending computername. I filed an issue in the appropriate place, but was told that it was too late for WMI cmdlets. Once the general principle of non-terminating errors is understood, substituting CIM cmdlets is an easy sell. Also, it’s a good reason for people to make the switch.

I also spoke at the St. Louis PSUG in August (on Error Handling). Ken Maglio spoke in September on accessing Web services (especially RESTful services).

I was privileged to speak at the Springfield .NET UG last week and gave a “developer’s overview of PowerShell”. BTW, it’s hard for me to try to sum up PowerShell and only talk for an hour. Had a great time, though.

This week, I (finally) hit 10,000 points on StackOverflow. On some level, I know it’s just fake internet points, but it’s a nice milestone.

Like everyone I know in IT, I often find useful answers to questions I have on StackOverflow. Since there are so many answered questions on that site, I generally don’t even need to ask the question, just search for it instead.

When I talk to people about StackOverflow, I always mention the awesome PowerShell presence there. Usually, if you ask a “good” question, you will have lots of people competing to quickly provide answers that are not only correct, but are also informative and helpful. I’m constantly amazed by the character of the PowerShell community. We’re all about getting things done and sharing what we use to succeed with others. I’m proud to be a part of this wonderful community.

And that brings me to the part of this “celebration” that isn’t fake.

In addition to the reputation number, StackOverflow also shows a “people reached” statistic.

That means (by StackOverflow’s calculations, at least) that almost a million people have viewed my answers (and questions). That’s a bit overwhelming. I can’t tell you how many times I talk to people about PowerShell and they tell me that they’ve used one of my answers. A million people, though, is more than I can fathom.

P.S. When I speak about StackOverflow, I also mean to include ServerFault, which is the sysadmin-oriented site in the same family. PowerShell questions pop up on both, but more often on the significantly more popular StackOverflow.