Building Unit Tests for PowerShell with Pester

Chris Wahl · Posted on 2016-03-30 · Updated 2020-05-07

I’ll be the first to admit that unit testing code written in PowerShell was a foreign concept to me. I mostly write one-off scripts that get a job done, then only revisit that code when necessary. The work that others and I have put into the Rubrik PowerShell Module, however, has changed that model a tad. This is because it’s a project I plan to continue to evolve over time as the product stack grows in features and functionality. Plus, I’m having a lot of fun with it.

Back to the idea of unit testing. In simple terms, it’s really about making sure that the code will perform as expected. Each time a change is made to a function or script, I need to know that the outputs remain constant. While it’s easy to check this on a case-by-case basis, it becomes nigh impossible to do at scale. Plus, testing by hand is error prone and subject to the conditions provided by whatever workstation I happen to be using. Unit testing allows for a set of expectations to be tested in a programmatic fashion. Failing tests give an easy indicator that something is wrong: either the code is erroneous or the endpoint communication workflow has changed.

This is what I imagine unit testing looks like in the wild

Pester is a project that focuses on creating tests for PowerShell in the Behavior-Driven Development (BDD) style. It’s designed around writing a function and the tests for that function at the same time: the premise is that you write tests that fail, then build code until the tests pass. I skipped over that part, however, as I already had code written that I wanted to test (as I’m sure many of us do). Note that Pester uses a slightly different syntax than you may be used to in normal PowerShell scripts.

Example Test Code

Let’s dig into some example code. One of my cmdlets in the Rubrik PowerShell Module is used to connect to the Rubrik cluster. Thus, I’ve written two tests in Pester to validate that the cmdlet is working. The first test pings a test cluster just to validate that the testing platform can reach the endpoint (otherwise all other tests will fail). The second test connects to the cluster and validates that a token was received. If the token value is null (doesn’t exist), then I know the connection was not made.
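Here’s a sketch of what those two tests look like in Pester v3 syntax. The `$server` and `$cred` variables, and the `token` property on the connection object, are assumptions for illustration; substitute whatever your own module exposes.

```powershell
# Connect-Rubrik.Tests.ps1
# Sketch only: $server and $cred are assumed to be defined elsewhere
# (e.g. in a shared test setup script), and the token property name
# is illustrative rather than the module's exact implementation.

Describe -Name 'Connect-Rubrik Tests' -Fixture {

    It -name 'Ping the test cluster' -test {
        # Fail fast if the endpoint is unreachable; all other tests depend on it
        $ping = Test-Connection -ComputerName $server -Count 1 -Quiet
        $ping | Should Be $true
    }

    It -name 'Connect and receive a token' -test {
        # A successful connection should populate a non-empty token
        Connect-Rubrik -Server $server -Credential $cred
        $global:RubrikConnection.token | Should Not BeNullOrEmpty
    }
}
```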

The Describe section is used to describe a block of tests that will be run. Each test then begins with It and follows with a -test {code} block to determine the results. The result is piped to Should Be for comparison, which works sort of like a fancy -eq. It seems complex, but it’s really just saying “Run this bit of code and compare the results to what we are expecting to see.” If the results do not match expectations, the test fails. Otherwise, the test passes. Simple.

Running Pester Tests

Once tests are created, running them just requires using Invoke-Pester somewhere within the folder structure of your code. The cmdlet will recursively search through the folder hierarchy to find tests that follow the *.Tests.ps1 naming format. Below I’ve run the Invoke-Pester cmdlet at the base of the module folder and again in the Tests folder. Note that my Pester test script is named Connect-Rubrik.Tests.ps1 to mirror the cmdlet that I’m testing.
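In practice that looks something like this (the folder layout is an assumption based on the module structure described above):

```powershell
# From the base of the module folder; recursively discovers *.Tests.ps1 files
Invoke-Pester

# Or point it at the Tests folder explicitly
Invoke-Pester .\Tests
```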

You can also get granular with the Invoke-Pester cmdlet and call specific tests, if desired. Additionally, you could create one large test file and test all of the code. I am leaning towards modular test files to allow for easier collaboration on the tests.
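For example, you can pass a single test file, or filter by the Describe block’s name. The file path and Describe name below are hypothetical:

```powershell
# Run only one test file
Invoke-Pester .\Tests\Connect-Rubrik.Tests.ps1

# Run only the Describe blocks whose name matches
Invoke-Pester -TestName 'Connect-Rubrik Tests'
```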

In a future post, I’ll go further with BDD and highlight how I’m using AppVeyor to build a test environment that automatically checks code submitted to GitHub.