Today I had a discussion with a colleague who mentioned that the only reason he still uses PowerGUI is that ISE doesn't support passing parameters to scripts. So we made a deal: if he couldn't find a solution anywhere on the internet, I'd create one. A few minutes later he came back and asked me to do it.

The whole solution consists of two functions:

Add-ISEScriptParameter: Asks you (via Read-Host) for the parameters to pass to your script. I had a version that used a MessageBox, but left it out of the final one.

Start-ISEScriptWithParameter: Runs the actual script with the parameters stored in the previous step.

I use a simple hash table in the global scope to store <FileName>=<Parameter> pairs, so you can have different values for different scripts.
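A minimal sketch of how the pair of functions could look (this is my illustration, not necessarily the original implementation; $psISE is only available inside the ISE):

```powershell
# Global hash table mapping <FileName> = <Parameter>.
$Global:ISEScriptParameters = @{}

function Add-ISEScriptParameter {
    # Ask for the parameter string and remember it for the current file.
    $file = $psISE.CurrentFile.FullPath
    $Global:ISEScriptParameters[$file] = Read-Host -Prompt "Parameters for $file"
}

function Start-ISEScriptWithParameter {
    # Run the current file with the parameter string stored earlier.
    $file = $psISE.CurrentFile.FullPath
    Invoke-Expression "& '$file' $($Global:ISEScriptParameters[$file])"
}
```

Invoke-Expression is used here so a stored string like `-Name test -Count 3` is parsed into separate parameters instead of being passed as one argument.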

As I mentioned in the previous article, working with errors was the biggest pain point in most of the entries for the first advanced event.
Firstly, the event description does not say the errors need to be handled in any way; it only says you need to display them clearly, and PowerShell does that by default. So even without any error handling you would probably still comply with the requirements.
On the other hand, when a standard PowerShell cmdlet fails, it reports the error for the current item and moves on to the next one. This is what most of you tried to achieve, but not always successfully: error records got stripped down to bare error messages, terminating errors became non-terminating, and vice versa.
The core of the task is to move files from one location to another. The Move-Item cmdlet is the most reasonable choice, but it has one limitation that must be taken into account: if the destination path (folder) does not exist, the operation ends with a non-terminating error.
For the sake of simplicity let's assume the source and destination root paths exist, because I tested their availability while validating the parameters. There I also made sure the path is a FileSystem path, not a registry or other provider path (believe it or not, this is more important than the first check). I also assume the remaining parameters are valid.
The goal is to move as many files as I can to the new location, freeing as much disk space as possible for Dr. Scripto.

My script is going to be laid out like this:

$sourceItem = # [1] Get items recursively and filter them according to the requirements.
foreach ($currentItem in $sourceItem)
{
    # [2] Form the path to the resulting directory.
    # [3] If the destination directory doesn't exist, create it.
    # [4] Move the current item.
}

I ruled out "path does not exist" type errors by validating the input, but I can still get an access denied error. These errors originate from the cmdlet and are non-terminating. Unless some core exception occurs, it is pretty safe to assume the worst that could happen is a ton of AccessDenied errors and an empty $sourceItem collection. Putting this into the context of the function: add as many files as you can to the collection.

I am going to use the .Replace() method of System.String; there are two possible exceptions, ArgumentNullException and ArgumentException. I am unlikely to hit either, because I validated my input.

In this step I am going to create the directory for the files if it is not already in place. An "item already exists" exception is possible here; I will avoid it by testing the path before trying to create the folder. Any other exception coming from the cmdlet I want captured.

Moving the current item may also produce quite a few non-terminating exceptions. If this operation fails I want to remove the previously created folder so I don’t leave any garbage behind.

If any of the last three steps (2, 3 or 4) fails, the function should progress to the next item in the list. I am going to force this behavior using ErrorAction on the cmdlets and a try/catch block that captures only two types of exceptions:

[Management.Automation.MethodInvocationException]
These may be raised from .NET method calls.

[Management.Automation.ActionPreferenceStopException]
These are raised when a cmdlet fails and has its ErrorAction set to Stop.
All these exceptions were originally non-terminating, so after doing the clean-up I write them back to the error stream using Write-Error, keeping all the original information.
The important thing is that most other exceptions (like OutOfMemoryException) can still pass through unaffected. So if the script fails, terminating exceptions bubble up and possibly terminate the script, while non-terminating exceptions are written to the error stream and the script progresses.
Here is the example code:
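Since the original sample is not reproduced here, the following is my reconstruction of the approach described above; the function name Move-FileSafely and its parameter names are my own:

```powershell
function Move-FileSafely {
    param(
        [string]$Source,
        [string]$Destination
    )
    # [1] Collect as many items as possible; AccessDenied errors are
    #     non-terminating and simply leave those items out of the collection.
    $sourceItem = Get-ChildItem -Path $Source -Recurse |
        Where-Object { -not $_.PSIsContainer }

    foreach ($currentItem in $sourceItem) {
        $created = $null
        try {
            # [2] Map the source folder onto the destination folder.
            $targetDir = $currentItem.DirectoryName.Replace($Source, $Destination)

            # [3] Create the destination directory only when it is missing.
            if (-not (Test-Path -Path $targetDir)) {
                $created = New-Item -Path $targetDir -ItemType Directory -ErrorAction Stop
            }

            # [4] Move the current item; Stop turns failures into catchable exceptions.
            Move-Item -Path $currentItem.FullName -Destination $targetDir -ErrorAction Stop
        }
        catch [Management.Automation.MethodInvocationException],
              [Management.Automation.ActionPreferenceStopException] {
            # Clean up the folder created in this iteration, then re-emit the
            # original record as a non-terminating error and continue.
            if ($created) {
                Remove-Item -Path $created.FullName -ErrorAction SilentlyContinue
            }
            Write-Error -ErrorRecord $_
        }
    }
}
```

Note how the catch block lists only the two exception types; anything else (a genuinely terminating exception) bubbles up past the function untouched.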

Here are a few tips on scripting I'd like to share. Some are inspired by the entries for the first Scripting Games event; some are just related.

Before I start the rant, please keep in mind that I understand criticizing is a lot easier than writing the script and making it awesome. Good luck with the next event!

Variables

Use variables and name them properly. Variables are an easy way to label the data you work with. There are a few things to focus on:

Do you use the same data in more than one place? Use a variable. This especially holds true for the beginner track: the scripts I saw often sacrificed readability and maintainability to make the solution a one-liner. Here is a little trick that may help you define a variable and still keep it a one-liner: ($source = 'C:\temp') | Get-ChildItem, or even Get-ChildItem -Path ($source = 'C:\temp'). The $source variable is set and the value is also passed to the next command through the pipeline. A bit dirty, but still better than nothing.

Does the variable name say enough about the data? Variable names like $errVar, $n, $data, $temp or $creDate do not. Tell me more about what is happening, don’t make me guess.

Style

No aliases.

Full parameter names.

Use standard PowerShell naming convention: Verb-SingularNoun. Run the Get-Verb cmdlet and choose the right verb from the right category. Choosing the “Move” verb for the first event was imho optimal.

Don’t get crazy with the noun part, no Archive-ApplicationLogFileToNetworkShare please.

Try to keep your parameter names as standard as possible. Source, Destination, ComputerName, Name etc. If you are in doubt look at the standard commands and follow their lead.

Formatting

Use whitespace to group related commands.

Break lines after the pipeline operator. There is no need for a backtick (`) in that case.

You won't get many points for speed. Read the script a few more times and ask your peers for a review to make it as easy to understand as possible.

Conditions

Please do not use too many nested if conditions. I consider three a good maximum. More than that and you make me struggle to understand what is going on.

The correct path should be easy to follow. Put the "everything is going OK" path in the "if" part and the erroneous one in the "else" part.

Don't use "$something -eq $true" or "$something -eq 'True'". You may slip and write 'Treu' instead, resulting in an unnecessary error. Just use "if ($something) {}" instead.

On the other hand, if you are testing whether something is zero, use "-eq 0".

I know using “!” instead of “-not” is possible but I like the latter better.

Comments

Before commenting anything, consider whether you should comment or use the Write-Verbose cmdlet instead. I personally use comments to explain why I do things the way I do, and Write-Verbose to inform the user about what is happening.

I did not see any usage of Write-Debug, and that is maybe for the better; it would perhaps be a little too much detail for the event entries.

Inline help

On the advanced track the inline help is kinda mandatory. But if you have to choose between perfecting the code and perfecting the inline help, please choose the code.

Input validation

There is no need to validate an integer parameter for range that is 0 to [int]::MaxValue. It is done by default.

The validating script "Test-Path -Path $_" is a clever way of validating an input path, but keep in mind it will return true even if you provide, for example, a registry path.
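A stricter validation could also check the provider, not just the path's existence. Something like this hypothetical helper (the name is mine) could be used inside a ValidateScript block:

```powershell
function Test-FileSystemPath {
    param([string]$Path)
    # True only when the path exists AND belongs to the FileSystem provider,
    # so registry or variable paths are rejected.
    (Test-Path -Path $Path) -and
        ((Get-Item -Path $Path).PSProvider.Name -eq 'FileSystem')
}
```

The -and operator short-circuits, so Get-Item is only called when the path actually exists.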

Error handling

Most of the people struggled with this.

Forcing the Stop ErrorAction on a cmdlet, capturing the exception in the catch block and then writing out just its message is not the way to do it.

I saw a lot of comments saying there is no error handling because there was no try catch block. I don’t think this is the correct way to look at error handling.

Talking about geeky ways to do stuff with Jaap Brasser, I stumbled upon this gif, in which someone creates what I suppose is C code using MsPaint. Pretty awesome stuff, even if the code won't compile afterwards because of the bmp header. Wondering what some PowerShell code would look like as a bmp, we attempted to write a function to automate the process; here is my take on it:

Here is an example of “Hello World!” script converted to bmp:
The image is of course zoomed out; here is a link to the original.

The logic behind the code is pretty simple. A 54-byte file header and the appropriate file extension are what make a file a bmp file. The header is represented as an array of byte values in the code. It contains information about the file contents, most importantly its width (offset 18) and height (offset 22). If these values are set too low, not all the data in the file is visible; if they are too high, the file comes out corrupted, so I need to calculate the exact values. The header is followed by the image data (4-byte groups) created from the byte values of the letters. All of this is written out as bytes to an appropriately named file.

There are a few things I did not take into account: I blindly assume the input values are correct and the input file is UTF-8 encoded.

It turned out the logic behind the first version was flawed and it in fact truncates some of the data, so here is an updated version that tries to fit the data into as square an area as it can.
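The updated logic could be sketched like this (the function name ConvertTo-Bmp and the 32-bpp layout are my assumptions, not the original code): compute width and height from the data length so nothing is truncated, pad the data to fill the pixel grid, and prepend the 54-byte header.

```powershell
function ConvertTo-Bmp {
    param(
        [string]$Path,
        [string]$Destination
    )
    $data = [IO.File]::ReadAllBytes($Path)

    # One 32-bit pixel per 4 bytes of script; aim for a roughly square image.
    $pixels = [Math]::Ceiling($data.Length / 4.0)
    $width  = [int][Math]::Ceiling([Math]::Sqrt($pixels))
    $height = [int][Math]::Ceiling($pixels / $width)

    # Pad the data so it fills width*height pixels exactly.
    $padded = New-Object byte[] ($width * $height * 4)
    [Array]::Copy($data, $padded, $data.Length)

    # 54-byte header: BITMAPFILEHEADER (14 bytes) + BITMAPINFOHEADER (40 bytes).
    $header = New-Object byte[] 54
    $header[0] = 0x42; $header[1] = 0x4D                              # 'BM' signature
    [BitConverter]::GetBytes(54 + $padded.Length).CopyTo($header, 2)  # total file size
    [BitConverter]::GetBytes(54).CopyTo($header, 10)                  # pixel data offset
    [BitConverter]::GetBytes(40).CopyTo($header, 14)                  # info header size
    [BitConverter]::GetBytes($width).CopyTo($header, 18)              # width (offset 18)
    [BitConverter]::GetBytes($height).CopyTo($header, 22)             # height (offset 22)
    [BitConverter]::GetBytes([int16]1).CopyTo($header, 26)            # color planes
    [BitConverter]::GetBytes([int16]32).CopyTo($header, 28)           # 32 bits per pixel

    [IO.File]::WriteAllBytes($Destination, $header + $padded)
}
```

Because width and height are derived from the data length, the padded pixel area always covers the whole script, which is exactly the truncation problem the update fixes.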

Lately I have been implementing something that involves a lot of P/Invoke calls, with several DWORD flags passed in. A DWORD is an unsigned 32-bit integer and is usually written as a hexadecimal number, e.g. 0x000000ff. At first it all seemed peachy, until it turned out that PowerShell converts the hexadecimal representation to a signed integer (values less than 0 are possible), not an unsigned integer (negative numbers are not possible). Let's see an example:

Here is the maximum value of a DWORD in hexadecimal format: 0xffffffff. I expect it to be 4294967295 (the same as [uint32]::MaxValue), but running it in the PowerShell console returns -1, a value clearly outside the range of the UInt32 type.

I realized I needed to do the conversion myself and found it can be done easily using the static methods of the System.Convert class. In the example I specify the hexadecimal representation of the number as a string, plus the base of hexadecimal numbers, which is 16.

[Convert]::ToUInt32("0xffffffff",16)

Note: changing the base value to 2 lets you work with binary numbers, 8 with octal numbers, and 10 with decimal numbers (the default).

This is easy but inconvenient and error-prone, so I created a function called ConvertFrom-Dword that validates the input for me and returns the number as an unsigned integer:
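The function itself is not reproduced here, but a minimal sketch of it might look like this (my reconstruction):

```powershell
function ConvertFrom-Dword {
    param(
        # Accept 0x-prefixed hexadecimal strings of up to 8 digits (a DWORD).
        [ValidatePattern('^0x[0-9a-fA-F]{1,8}$')]
        [string]$Value
    )
    # Convert from base 16 to an unsigned 32-bit integer.
    [Convert]::ToUInt32($Value, 16)
}
```

With this in place, ConvertFrom-Dword '0xffffffff' returns 4294967295 instead of -1.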

This is easier, though still not perfect; for the time being it will do.

In real life, take for example the CreateFile function documented here. From the name you would probably say it is used just to create files, but it can also open an existing file. When you do that, you can decide whether you need the file opened exclusively (locked) or whether you want to share access to it with other processes. For that purpose there is the dwShareMode parameter, which you set by specifying the correct "flags" (DWORD values).

The description on the linked page says:

The requested sharing mode of the file or device, which can be read, write, both, delete, all of these, or none (refer to the following table).

0                  0x00000000
FILE_SHARE_DELETE  0x00000004
FILE_SHARE_READ    0x00000001
FILE_SHARE_WRITE   0x00000002

If you read carefully, you probably wonder how to specify the "both" and "all of these" cases when there are no such items in the table. The solution is combining the flags with the -bor (bitwise OR) operator. If you want to share both read and write, you do: 0x00000001 -bor 0x00000002, which returns 3, or 0x00000003 if you use this little trick.

The -bor operation may look like simple addition, but it is not, because 1 -bor 1 = 1; it operates on the binary representation of the numbers. Look at the items in the table again: why is 0x00000003 missing?
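A quick demonstration of the difference between -bor and addition:

```powershell
$FILE_SHARE_READ  = 0x00000001
$FILE_SHARE_WRITE = 0x00000002

# Combining two distinct flags sets both bits.
$both = $FILE_SHARE_READ -bor $FILE_SHARE_WRITE   # 3 (read and write)

# ORing a flag with itself is idempotent, unlike addition.
$same = $FILE_SHARE_READ -bor $FILE_SHARE_READ    # 1, whereas 1 + 1 would be 2
```

This is also why 0x00000003 is missing from the table: it is not a flag of its own but the combination of FILE_SHARE_READ and FILE_SHARE_WRITE.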
Now I see I need some more instruments to work with this type of data easily. Back to the console…

Recently I came across the Microsoft Team Foundation Server Service preview. The service is currently offered for free and supports both TFS and Git version control, so I signed up. The only thing that remained to make use of it was to connect PowerGUI to the service, which turned out to be quite trivial, so I will show you how, step by step:
Some basic documentation on connecting PowerGUI to TFS is provided on the PowerGUI Wiki. To be able to connect to the TFS Service you need the newer MSSCCI Provider installed. Both 32-bit and 64-bit versions are available on Microsoft's pages, but my PowerGUI x64 seems to detect only the 32-bit version.
When you have downloaded and installed the client, restart your PowerGUI editor, go to Tools > Options… > Version Control and select Team Foundation Server MSSCCI Provider as your current provider.

Then, in the main editor window, go to the new Version Control menu and choose Get files from Version Control. A new window appears where you can choose which provider you want to use. You probably have none yet, so you need to add one by clicking Servers… > Add… This will get you to this menu:
Here you have to specify the URL of your TFSS account followed by "/DefaultCollection". Click OK and you should be prompted to log in to your Microsoft account:
Logging in takes a few moments, and then you are hopefully presented with this window:

Just click Cancel to return to the selector:
Click OK here and you are good to go.

In the next step you need to choose a local folder to store the data, and also the server path to the project you work on.

Next you need to choose the file you will edit; I chose to name it hello.ps1.

If you are yet to create a project, go to your browser, navigate to your TFSS web page and create a new Team project.

Now you have your file open and you can check it out, make your changes and check it back in.

One of the things I always wondered about is how to find out whether strict mode is set in a PowerShell session. I had put the problem aside until an answer was posted to the PowerShell.com forum. Finding out where and how the value is stored was the easy part: I just opened .NET Reflector and looked at the definition of SetStrictModeCommand. Getting the value from the PowerShell session was much harder. After a few days of casual digging it turned out the answer can be retrieved using Reflection. As I am new to the whole Reflection thing it was pretty hard to grasp the basic principles, but fortunately this POSHCode article came to the rescue. It contains basically everything I needed to extract the info.

The code is not perfect, because only the state of the global scope is retrieved. Ideally it should be retrieved from the current scope, but unfortunately I am unable to do so (suggestions welcome). In its current state, strict mode appears as not set unless you set it in the global scope. This returns incorrect results:

Playing around with exceptions, I got really tired of walking up the type tree using .getType().BaseType.GetType().fullname, so I decided to create a simple Resolve-Type function. The function's output is not very well formatted; I may improve on this later.
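A sketch of such a function could be as simple as walking the BaseType chain and emitting each full name:

```powershell
function Resolve-Type {
    param([Type]$Type)
    # Emit the full name of the given type and of every type above it,
    # all the way up to System.Object.
    while ($Type) {
        $Type.FullName
        $Type = $Type.BaseType
    }
}
```

For example, Resolve-Type -Type ([ArgumentNullException]) lists the whole hierarchy from System.ArgumentNullException down to System.Object.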

Lurking through PowerShell's inner workings while facing a problem you have no clue about is not a rare occasion. Tracing the command/expression usually gives you at least a peek into the problem's domain, but there is one slight problem with the Trace-Command cmdlet: the Name parameter produces a vast amount of output when used with the asterisk wildcard. In this article I used it with success because I was pretty sure what I was looking for. Going through pages of debug output is not my favorite thing to do, so I looked at which trace sources produce the most output with the least useful information and narrowed it down to the ConsoleHost provider and sometimes the Type* providers. Now if only there was a way to get all the sources excluding the listed ones. Looking at the definition of the Name parameter on the Trace-Command cmdlet, I see it accepts an array of strings as input. From there it is easy:
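One possible way to do it (a sketch): enumerate all sources with Get-TraceSource, drop the noisy ones, and hand the remaining names to Trace-Command.

```powershell
# All trace source names except ConsoleHost and the Type* family.
$sources = Get-TraceSource |
    Where-Object { $_.Name -ne 'ConsoleHost' -and $_.Name -notlike 'Type*' } |
    Select-Object -ExpandProperty Name

# Trace an expression with the filtered set of sources.
Trace-Command -Name $sources -Expression { Get-Date | Out-Null } -PSHost
```

Because -Name takes a string array, the whole filtered list can be passed in one go instead of the asterisk wildcard.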