"Wow, this is great!", you say. On the initial announcement, yeah, this seems like a great idea, something developers have been clamoring for for YEARS. No more Cygwin, no more crazy emulators: everything Linux, running natively on a Windows box!

But, as Charles Dickens wrote in A Tale of Two Cities, "It was the best of times, it was the worst of times."

After getting over my initial giddiness about these VERY cool announcements, the question that came to mind was 'WHY!?!?' Why is Microsoft doing this? This isn't just a 'hey, let's do some cool experiments to get developers back'. These are very serious investments that are not being made on a lark.

First off, the Linux love affair seems to have started with Satya Nadella. I think under Steve Ballmer, the words Linux, Unix, and OS X (and iOS) were banned. Microsoft had not successfully done cross-platform software since the early days of Excel for the Mac and its port to Windows. Windows for Alpha and Windows for Itanium never took off. Office for Mac was a red-headed stepchild. Heck, even getting Windows Mobile to run on phones proved to be a huge challenge.

Since Mr. Nadella took charge, Linux has not only been unbanned, it has been completely embraced. It started with the Mac and products like Silverlight and Office. Since OS X is based upon BSD Unix, Microsoft had to come up with tools that let them develop for the Mac. Microsoft started out slow, but lately they have brought Office for the Mac to pretty good parity. In doing so, they have built up a better understanding of developing for Unix, and are now applying that to Linux.

The one thing that worries me is that Microsoft seems to be developing somewhat of an inferiority complex. Microsoft under Bill Gates and Steve Ballmer would have always been 'we think ours is better, deal with it'. They were almost as arrogant as Steve Jobs. The new Microsoft is almost apologetic: 'hey, we want to be where the cool stuff is, and we realize our stuff isn't cool'. Which is sad, considering that the latest Visual Studio is awesome, PowerShell is cool, and Windows 10 is probably the best OS they've ever done.

Am I excited by what Microsoft is producing? Sure! I love the fact that my skill set will start to be more cross-platform. I just want to know 'why'. Yes, I understand this will help Azure, and that is probably where the future of Microsoft is. But a LOT of resources are being poured into this Linux initiative, and there doesn't seem to be a way for Microsoft to make money from it. Microsoft is not Google, which can play with cool things just because they're cool.

Trends tend to go in cycles. With computers, we've seen the old become the new, back to the old, and back to the new. Take the transformation from mainframes to PCs and back to big servers. The same thing has happened with computer languages. One of the most notable examples is the rebirth of the command prompt. In the dark ages, mainframe and Unix admins used the command prompt to manage the system because nothing else existed. Then came the GUI tools. Wonderful! Visual! Easy to see, easy to learn. Unfortunately, difficult to automate. For one or two servers, that wasn't a problem. As platforms like Amazon's cloud and Microsoft's Windows Azure become more prevalent, though, managing by GUI becomes very difficult. For Unix/Linux systems, there is a very rich scripting ecosystem. But what about Windows systems?

PowerShell started as a way to manage systems via the command line using .NET, and has expanded to encompass everything from Exchange to SQL Server to Windows Azure. The current version of PowerShell is vastly different from the original v1.0, with almost all commands being pipe-able and reusable. The new remoting features are amazing. There is even a built-in web site that allows accessing the command line via a web server. Yes, Unix/Linux admins will say 'so what, this has all been done on Unix before'. VERY true. It's just nice to have this built into Windows as a native component rather than as a third-party add-in. Also, PowerShell is built on .NET, with all of the latest and greatest concepts such as fully dynamic objects and duck typing.
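As a small illustration of that pipe-ability, every stage below passes real .NET objects rather than text (Get-Process is a stock cmdlet; the 100MB threshold is just an arbitrary example):

```powershell
# Each stage receives live Process objects, not lines of text,
# so properties like WorkingSet64 can be filtered and sorted directly.
Get-Process |
    Where-Object { $_.WorkingSet64 -gt 100MB } |
    Sort-Object WorkingSet64 -Descending |
    Select-Object Name, Id, WorkingSet64
```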

Having said all that, PowerShell is one of my top things ‘to learn’. After checking out the initial Jump Start Course on the Microsoft Virtual Academy, I’ve been trying to ‘live in PowerShell’. It certainly makes sense when working with multiple machines. The remoting alone makes managing multiple web servers in a farm easier.
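A sketch of what that remoting looks like in practice. The server names here are hypothetical placeholders, and PowerShell remoting has to already be enabled on the target machines:

```powershell
# Run the same command on every web server in the farm at once.
# 'web01' and 'web02' are placeholder machine names.
Invoke-Command -ComputerName web01, web02 -ScriptBlock {
    Get-Service W3SVC | Select-Object Status, DisplayName
}
```

Each result comes back tagged with the originating computer name, which is what makes checking a whole farm in one line practical.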

First off, there is no UnmapNetworkDrive method on the WScript.Network object; I don't know where I got that from. What I found instead was the RemoveNetworkDrive method. The only problem with that method is that it throws an exception if the drive letter isn't mapped. So, I ended up searching the PSDrives to see if the mapped drive already exists before unmapping it. The last trick was figuring out whether the resulting object is null. To test that, all one has to do is compare it against $null, e.g. if ($drive -ne $null).
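Put together, the check-then-remove logic described above reads roughly like this (the drive letter is a placeholder):

```powershell
$driveLetter = "Z"  # placeholder: use whatever letter you mapped

# Only unmap if the drive actually exists; otherwise
# RemoveNetworkDrive throws an exception.
$drive = Get-PSDrive -Name $driveLetter -ErrorAction SilentlyContinue
if ($drive -ne $null) {
    $net = New-Object -ComObject WScript.Network
    $net.RemoveNetworkDrive("$($driveLetter):")
}
```

The -ErrorAction SilentlyContinue is what keeps Get-PSDrive from complaining when the drive isn't there, so $drive simply comes back null.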

PowerShell is a VERY cool scripting language from Microsoft. I've used it to build some of our internal web site deployment scripts. Unfortunately, our MAIN web site is not accessible through the normal shares. To get to the server, one must access the computer using a UNC path, something like:

\\123.123.123.123\ShareName

PowerShell does NOT like trying to connect to a non-authenticated IP on the command line. It keeps complaining that the location cannot be found. Set-Location doesn't work, as the FileSystem provider doesn't accept the -Credential parameter.

Fortunately, a solution DOES exist. Thanks to a post on eggheadcafe.com, I was able to build up a script that maps a network drive to the remote computer.
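The script itself isn't reproduced in this excerpt; a sketch along the lines described, using the WScript.Network COM object, looks like this (every value here is a placeholder to swap out):

```powershell
# newDriveLetter, the IP/share, and the credentials are all placeholders.
$net = New-Object -ComObject WScript.Network
$net.MapNetworkDrive("Z:", "\\123.123.123.123\ShareName",
    $false, "domain\user", "password")
```

MapNetworkDrive's third argument ($false here) controls whether the mapping is persisted to the user profile; the last two supply the alternate credentials that Set-Location can't.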

Replace newDriveLetter with whatever drive letter you need. Also, replace the IP and share name with whatever location you are trying to connect to. And finally, replace domain\user and password with the actual user name and password.

Jeffrey Snover just posted a little update on the 'state of PowerShell'. It's amazing that they've hit (or are just about to hit) <spoken in a Dr. Evil voice> 'one million downloads'. Mwah-ha-ha-ha… Oops, sorry there, got a little carried away…

I just want to add my $.02 worth. PowerShell is one of the pillars of the future of Microsoft. I think it's going to be one of those 'must learn' technologies that define the industry. I may be wrong here, but I don't think anything quite like it has been built before. PowerShell has the ability to interface with ANYTHING in the .NET library. ANYTHING! And it treats objects as objects. No more reading a line of text and parsing it for data. How cool is that?
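For instance, where a classic shell would pipe a directory listing through cut or awk to pull out file sizes, PowerShell just hands you the property:

```powershell
# No text parsing: Length is a real property on each FileInfo object.
Get-ChildItem |
    Where-Object { -not $_.PSIsContainer } |
    Measure-Object -Property Length -Sum
```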

Do you hate the fact that the cursor ends up at the end of the path in the PowerShell shell? Here’s a trick to make the prompt put a newline at the end of the path so that the caret always starts on the next line.

First, in your 'My Documents' folder, create a directory called WindowsPowerShell if it doesn't already exist. In the My Documents\WindowsPowerShell directory, create a file called profile.ps1. You can use this file to execute code when the shell starts up. Edit profile.ps1 and add the following line to it:

function global:prompt {(get-location).path + "`n>"}

See, PowerShell allows you to hook functions into the system. The prompt isn't hard-coded; you simply override the global prompt function. To put a newline in the text, we use the `n escape sequence.

There's a new Release Candidate out for Microsoft Windows PowerShell. RC2 seems very close to shipping: it runs faster and has some nice features baked in (WMI and ADSI are now included). As with each release, there have been numerous changes. One change that is rather important, but somehow missed in the notes, is the change to the profiles directory. The profiles directory used to be Documents and Settings\username\Documents\PSConfiguration. That has changed to Documents and Settings\username\Documents\WindowsPowerShell. It's a good change, and I totally agree with it, but *please* document it!

BTW, if you are wondering how I found the new profile location, at a PowerShell prompt, I typed $profile. This gives you the location of the profile used in starting the session.

I've talked a little about Microsoft's new command shell, PowerShell. I've been using it for my day-to-day tasks at work, and there are some really great features for scripting and programming. Here are a couple of bang-on useful commands that will help with the *discovery* of PowerShell:

First up, get-member. This nifty little command is used in conjunction with a pipe from another command. What get-member does is list the members of the object(s) passed along the pipe from a previous command. Woohoo! RTTI for scripts!

Try this example:

PS C:\> get-childitem | get-member

This will list all of the methods and properties of the objects returned by get-childitem (aka dir or ls).

This leads me to the next darned useful thing: where. Now that we know what can be accessed, the where command lets us use it.

Try this example:

PS C:\> get-childitem | where { $_.PSIsContainer }

This will list all of the containers in the given location. It's a very cool way to get the subdirectories of a directory. Also in this example, notice the $_. That syntax means 'the current object'. You can reference any property of the current object inside the script block.
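For example, any other property works the same way inside the where script block. Here, filtering on Length to find files over 1 MB:

```powershell
PS C:\> get-childitem | where { $_.Length -gt 1MB }
```

The 1MB literal is a PowerShell convenience: numeric suffixes like KB, MB, and GB expand to the corresponding byte counts.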

The last idea to leave with is the format- commands like format-table and format-list. These allow you to build your own output formats for any command that emits an object.

Try this example:

PS C:\> get-childitem | format-table Name, Length, PSIsContainer

This will take get-childitem's output and list only the Name, the file Length, and whether the child item is a container.

Can we say, Schweeeeet!?!

PowerShell's name is appropriate, although *very* goofy. I haven't even scratched the surface in my own personal use, and I can already see that this is the thing to learn.