After setting up the context in the previous post, it is time to look at what the authoring workflow looks like when using Pester to write operations validation tests, and then leveraging the PSRemotely DSL to target them at remote nodes.
This workflow consists of the following stages:

1. Get your tests ready – target/test a single node.

2. Prepare configuration data (abstract hardcoded values).

3. Use PSRemotely for remote operations validation.

4. Debug failures.

5. Report.

Note – Stages 1-3 will be covered in this post; Stages 4 and 5 will be covered in a follow-up post.

Since PSRemotely was born out of the need to validate an engineered solution, it excels at validating solutions where the nodes are consistent in behavior and have to be tested against similar configurations.

Ideally, the operations validation should run after each step to validate that the entire solution is being configured as per best practice. To keep today’s post simple, we will validate only the first step, which is deploying Windows Server, but similar steps apply when authoring validation tests for the other stages in the deployment workflow.

Now take a look at the referenced link and gather the list of configurations that need to be in place on each node as per step 1.

Deploy Windows Server 2016.

Verify the domain account is a member of the local administrator group.

So now we have the configurations we need to check on each node just before we configure networking on top of them. You can follow the commits on this branch on this test repository to see the changes made as part of the authoring workflow.

Stage 1 – Get your tests ready.

This stage consists of authoring tests using Pester/PoshSpec for operations validation.
Let us start by translating the configurations gathered above into independent Pester Describe blocks.

Below is a very crude way to determine that Windows Server 2016 is installed on the node. There are two Pester assertions: the first asserts that the OS type is a server, and the second that the OS SKU is either the Datacenter edition with GUI or Server Core.
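A sketch of what such a test might look like (the ProductType and OperatingSystemSKU values here are assumptions based on the Win32_OperatingSystem class, not the original listing):

```powershell
# Crude check that the node runs Windows Server 2016,
# either Datacenter with GUI or Server Core.
Describe 'Windows Server 2016 validation' {
    $OS = Get-CimInstance -ClassName Win32_OperatingSystem

    It 'Should be a server OS' {
        # ProductType: 1 = workstation, 2 = domain controller, 3 = server
        $OS.ProductType | Should Not Be 1
    }

    It 'Should be Datacenter with GUI or Server Core' {
        # OperatingSystemSKU: 8 = Datacenter, 12 = Datacenter Core (assumed values)
        @(8, 12) -contains $OS.OperatingSystemSKU | Should Be $true
    }
}
```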

Here is another independent test for validating that the domain account is a member of the local administrators group on a node.

# Validate that the domain account is part of the local administrators group on a node.
Describe "Domain account is local administrator validation" {
    $LocalGroupMember = Get-LocalGroupMember -Group Administrators -Member "S2DClusterAdmin" -ErrorAction SilentlyContinue

    It "Should be member of local admins group" {
        $LocalGroupMember | Should Not BeNullOrEmpty
    }
}

Stage 2 – Prepare node configuration data

If you look at the Pester Describe blocks authored to validate the configuration on the nodes, they might use environment-specific data hardcoded into the tests, e.g. the domain username in the above example.

So we now need to collect all this environment-specific data and decouple it from our tests.
Start with an empty configuration data file (place it in EnvironmentConfigData.psd1) and start populating it; it follows the DSC-style configuration data syntax.

Start by placing the values inside the node configuration data, with a general rule of thumb of mapping common data to the common node information hashtable, and node-specific details to the node configuration hashtable.

Now, in the previous tests, the only input is the domain username, so we can add that to the common node information hashtable, since the domain user’s membership in the local administrators group needs to be validated on all the nodes in the solution. The configuration data now looks like below:
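A sketch of what the DSC-style configuration data might look like at this point (node names are placeholders):

```powershell
# EnvironmentConfigData.psd1 – DSC-style configuration data.
@{
    AllNodes = @(
        @{
            # Common node information, applied to every node.
            NodeName   = '*'
            DomainUser = 'S2DClusterAdmin'
        },
        @{
            # Node-specific configuration.
            NodeName = 'S2DNode01'
        },
        @{
            NodeName = 'S2DNode02'
        }
    )
}
```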

Stage 3 – Using PSRemotely for remote ops validation

At this stage in the authoring workflow, we have our tests ready along with the environment configuration data in hand. Before using PSRemotely to target all the nodes for deployment readiness, we have to ask: how do we connect over PSRemoting to these nodes?

Are the nodes domain joined?

Do we connect using DNS name resolution or the IPv4/IPv6 addresses of the remote nodes?

Do we connect using the logged-in user account or an alternate account?

Based on the answers to the above questions, usage of the PSRemotely DSL varies a bit, and most of the variations are documented. For this scenario, DNS name resolution of the nodes is used (the nodes are already domain joined) and the logged-in user account will be used to connect to the remote nodes.

Now it is time to wrap our existing operations validation tests inside the PSRemotely DSL. The DSL consists of two keywords, PSRemotely and Node. PSRemotely is the outermost keyword, which lets the framework know that all ops validation tests are housed inside a <filename>.PSRemotely.ps1 file.

Getting back to the problem at hand, let’s wrap our existing Pester tests inside the PSRemotely DSL. This is straightforward for our scenario and looks like below. We can save the contents of the code snippet in a file called S2DValidation.PSRemotely.ps1 (PSRemotely only accepts files with the .PSRemotely.ps1 extension).
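A sketch of the wrapped tests (the exact DSL parameter names should be verified against the PSRemotely documentation; node names come from the configuration data):

```powershell
# S2DValidation.PSRemotely.ps1 – a sketch, assuming the PSRemotely DSL
# accepts DSC-style configuration data and exposes $Node inside Node blocks.
PSRemotely -ConfigurationData .\EnvironmentConfigData.psd1 {

    Node $AllNodes.NodeName {

        Describe 'Domain account is local administrator validation' {
            # The hardcoded username is replaced with node configuration data.
            $LocalGroupMember = Get-LocalGroupMember -Group Administrators -Member $Node.DomainUser -ErrorAction SilentlyContinue

            It 'Should be member of local admins group' {
                $LocalGroupMember | Should Not BeNullOrEmpty
            }
        }
    }
}
```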

Note – Take note of how the hardcoded value for the domain username (S2DClusterAdmin) from the standalone Pester tests is replaced with node-specific configuration data, e.g. $Node.DomainUser.

We are all set: with the two files (EnvironmentConfigData.psd1 and S2DValidation.PSRemotely.ps1) in the directory, it is finally time to invoke PSRemotely and give remote operations validation a go.

We can run Invoke-PSRemotely in the current directory to run all the operations validation tests housed inside it, or specify the path to a file ending with the .PSRemotely.ps1 extension.
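For example (the -Script parameter name is an assumption; check the module’s help):

```powershell
# Run every *.PSRemotely.ps1 file in the current directory.
Invoke-PSRemotely

# Or target a specific file.
Invoke-PSRemotely -Script .\S2DValidation.PSRemotely.ps1
```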

For each node targeted, a JSON object is returned. In the returned JSON object, the property Status is true if all the tests (Describe blocks) passed on the remote node. The Tests property is an array of the individual tests (Describe blocks) run on the remote node; if all the tests pass, an empty TestResult array is returned, otherwise the error record thrown by Pester is returned.

For the node which failed one of the validations, the JSON object looks like below. Individual TestResult will contain more information on the failing tests on the remote nodes.
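Based on the description above, the JSON object for a failing node might look roughly like this (the property values here are made up for illustration):

```json
{
    "NodeName": "S2DNode02",
    "Status": false,
    "Tests": [
        {
            "Name": "Domain account is local administrator validation",
            "Result": false,
            "TestResult": [
                {
                    "Describe": "Domain account is local administrator validation",
                    "Name": "Should be member of local admins group",
                    "Result": "Failed",
                    "ErrorRecord": "Expected: value to not be empty"
                }
            ]
        }
    ]
}
```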

For the failed node, we can quickly verify that out of the two validations targeted at the remote node only one is failing.

Now there could be many reasons why the operations validation tests on a remote node are failing. In the next post, we will take a look at how to connect to the underlying PSSession used by PSRemotely to debug these failures.

PSRemotely – Framework to Enable Remote Operations Validation

(Originally published on PowerShell Magazine, April 7, 2017.)

Before we get started with what PSRemotely is, here is some background.

As part of my work in an engineering team, I am tasked with writing scripts which will validate the underlying infrastructure before the automation (using PowerShell DSC) kicks in to deploy the solution.

Below are the different generic phases which comprise the whole automation process:

Pre-deployment – getting the base infrastructure ready, the bare minimum required for the automation; for example, network configuration on the nodes.

By validating the underlying infrastructure, I mean checking that the compute and storage physical hosts/nodes have a valid IP configuration, connectivity to the AD/DNS infrastructure, etc. – the key components that need to be tested and validated to give us confidence in our readiness to deploy the engineered solution on top of them.

Note – Our solution had scripts in place that would configure the network based on some input parameters and record this in a manifest XML file. After the script ran, we would assume that everything was in place. These assumptions at some point cost us a lot of effort in troubleshooting.

In short, the initial idea was to have scripts validating what the scripts did in an earlier step. So it began: I started writing PowerShell functions using workflows (to get parallel execution across nodes). This was a decent solution until requests came in to add validation tests for virtually everything in the solution stack, e.g. DNS configuration, network connectivity, proxy configuration, disks (SSD/HDD) attached to the storage nodes, etc.

We then looked into using some of the open-source PowerShell modules to help us perform operations validation. At that time, Pester was gaining traction in the community for operations validation.

Using Pester

We moved away from using standalone scripts for the operations validation and started converting our scripts into Pester tests. It is not surprising that many operations people find it easy to relate to using Pester for ops validation, since we have been doing this validation manually for ages. Pester just makes it easy to automate all of it.

For example, in our solution each compute node gets three NIC cards, and a pre-deployment script configures them. If we had to test whether a network adapter’s configuration was indeed correct, it would look something like below using Pester:
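A sketch of such a test (the adapter alias and IP address here are illustrative, not from the original listing):

```powershell
Describe 'Management NIC configuration' {
    $NetAdapter = Get-NetAdapter -Name 'Management' -ErrorAction SilentlyContinue

    It 'Should have the management adapter present and up' {
        $NetAdapter.Status | Should Be 'Up'
    }

    It 'Should have the expected management IPv4 address' {
        $IPAddress = Get-NetIPAddress -InterfaceAlias 'Management' -AddressFamily IPv4 -ErrorAction SilentlyContinue
        $IPAddress.IPAddress | Should Be '10.10.10.1'
    }
}
```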

Using PoshSpec & Pester

PoshSpec added another layer of abstraction to our infrastructure tests in the form of its own DSL.
Below is how our tests started looking with Pester and PoshSpec together.
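A sketch of how such a test might read (Interface is a built-in PoshSpec keyword; IPv4Address is the custom addition described in the note that follows; values are illustrative):

```powershell
Describe 'Management NIC configuration' {
    # Built-in PoshSpec keyword asserting on the adapter's status.
    Interface Management Status { Should Be 'Up' }

    # Custom keyword added to PoshSpec for this solution; it wraps
    # Get-NetIPAddress for the given interface alias.
    IPv4Address Management { Should Be '10.10.10.1' }
}
```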

Note – For validation of the IPv4 address, another keyword named IPv4Address was added to PoshSpec, which essentially calls Get-NetIPAddress and returns the IPv4 address assigned to the NIC interface with the specified alias.

At some point Ravi was tinkering with the Remotely PowerShell module and suggested taking a look at it. It was promising to begin with, as he had added support for passing a credential hash to Remotely: we would specify a hash table with the computer name as key and credential as value, and Remotely would take care of connecting to those nodes and executing the script block in the remote runspace. At this point things started falling into place for what we had in mind. Our tests started looking nice and concise:
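A sketch of what this looked like (the credential-hash parameter name is an assumption; node names and values are illustrative):

```powershell
# Hash table mapping node names to credentials, as described above.
$CredentialHash = @{
    'S2DNode01' = $NodeCredential
    'S2DNode02' = $NodeCredential
}

Describe 'Management NIC configuration' {
    It 'Should have the expected management IPv4 address' {
        Remotely -CredentialHash $CredentialHash {
            (Get-NetIPAddress -InterfaceAlias 'Management' -AddressFamily IPv4).IPAddress
        } | Should Be '10.10.10.1'
    }
}
```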

Soon we realized that the assertions above, e.g. {Should Be ’10.10.10.1’}, had to be dynamically created by reading the manifest XML file that drives the whole deployment; it contains the expected configuration of the remote nodes.

We wanted our tests to be generic so that we could target them at all nodes that are part of the solution. We were looking to organize our tests like below, where of course node-specific details, e.g. $ManagementIPv4Address, would be read from the manifest file and created on the fly either on the local machine or the remote node:

However, we soon ran into limitations with Remotely:

Remotely connects to all the nodes anew for each PoshSpec-based ops validation test, which results in a lot of overhead when running a large number of validation tests.

It was troublesome to pass environment-specific data to the remote nodes, e.g. passing the expected IPv4 address to the remote node in the above tests.

To run Pester/PoshSpec tests on the remote nodes, these modules need to be present on the remote nodes to begin with.

The existing Remotely framework was meant to execute a script block against a remote runspace, but it was not specifically built to perform operations validation remotely.

Enter PSRemotely

After trying to integrate Remotely with Pester/PoshSpec-based tests, we had a general idea of what we needed from a framework/DSL if it was to provide us with the capability of orchestrating operations validation remotely on the nodes. Below are some of the features we had in mind, along with the arguments for implementing them:

Target Pester/PoshSpec based operations validation tests on the remote nodes.

Allow specifying environment data separately from the tests, so that the same tests could be applied across nodes. We decided on the ability to use DSC-style configuration data for specifying node-specific environment details.

Easier debugging on the remote nodes in case tests fail. If something failed on a remote node during validation, we should be able to connect to the underlying PowerShell remoting session and debug the issue.

Allow re-running specific tests on the remote nodes. In case a test fails, performing a quick remediation action and validating that the specific test now passes is a good-to-have feature when you have a lot of tests in your suite.

Self-contained solution. Have the framework bootstrap the remote nodes with the required module versions (Pester & PoshSpec) under the hood; the remote nodes might not have internet connectivity.

Allow copying required artifacts to the remote nodes. For our solution, we require a manifest file with details about the deployment to be copied to each node.

Use PowerShell remoting as the underlying transport mechanism for everything.

Return bare-minimum JSON output if everything passes. If a test fails, return the error record thrown by Pester.

After having a clear idea of the features required in the framework and how we wanted the DSL to look, I started working on it. This post has set up the context on why we began working on something entirely new from scratch.

Join me in the second post where I try to explain how to use PSRemotely to target remote nodes for operations validation.

Pester Explained: Describe, Context, and It Blocks

(Originally published on PowerShell Magazine, December 3, 2015.)

This article is a part of a larger series on Pester.

Last time, we looked at how assertions work in theory, and how they are implemented in Pester. This gave us the foundation to understand how tests are failed, but in order to fail a test we first need to run it. So this time we will have a closer look at It, Context, and Describe and will create our own Test runner.

Poor man’s test runner

In the simplest case, you do not need much to run test script code. Actually, we already did it in the first article where we tested our Assert-Equal assertion.

We simply take a piece of code and wait for it to fail or pass. So technically, we do not need a test runner, but it makes our lives a lot easier. It takes care of looking up all tests in a test suite, shows a nicely colored and formatted output, and enables us to organize our tests a little better.

Making our own test runner

The main reason to have a test runner, though, is to be able to run all tests in the test suite even if some of them fail. To be able to do that we need to catch any exception and translate it to textual output or some other harmless type of output.

In the Assert-Throw assertion, we already did something very similar. We captured an exception so that the user would not see it. To do that we used a try-catch block and wrapped it in a function like this:
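A sketch of such a Test-Case function, based on the description that follows (the $Name parameter is an assumption added for readable output):

```powershell
function Test-Case ([string]$Name, [scriptblock]$ScriptBlock) {
    try {
        # Run the test code; a failed assertion throws an exception.
        & $ScriptBlock
        Write-Host -ForegroundColor Green "[PASS] $Name"
    }
    catch {
        # Catch the exception so the rest of the suite keeps running.
        Write-Host -ForegroundColor Red "[FAIL] $Name : $_"
    }
}
```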

In the Test-Case function we take a piece of code wrapped in a ScriptBlock. We execute this code using the & invocation operator and wait for it to either succeed or fail. By failing, we specifically mean that an exception was thrown. When an exception is thrown, for example by an assertion function, execution jumps inside the catch block and outputs a red message to the screen to notify us that the test code failed.

If no exception is thrown, the code writes a green message that lets us know that our code did not fail, which means that our test passed.

Create a suite of tests

Now we are ready to take what we learned so far and create a suite of two tests:
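For example, reusing the Assert-Equal assertion from the first article (parameter names assumed):

```powershell
# The first test fails, the second passes; both still run.
Test-Case 'one equals two' {
    Assert-Equal -Actual 1 -Expected 2
}

Test-Case 'one equals one' {
    Assert-Equal -Actual 1 -Expected 1
}
```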

Pay attention to the output. You can see that both tests run even though the first test failed. We just created our own test runner!

It

The Test-Case function is roughly equivalent to the “It” function of Pester. “It” hosts a single test and prevents any failed test from failing the whole suite.

The actual implementation of It is riddled with input validation, testing the framework state, skipping tests, making them pending and so on, but the basic idea is still the same. Look at the implementation of Invoke-Test function and you will find the familiar pattern.

More interesting bits of It

There are a few more interesting bits in Pester’s It function. They are not important for understanding the framework as a whole, but the It function implements so much that I feel obligated to describe at least some of them.

Any output of the ScriptBlock is assigned to $null, which simply means that the output is discarded.

The script invocation is placed inside a single-iteration “do until” loop. The code in this loop runs only once, so the loop looks useless, but the opposite is true: when the “break” keyword is used in test code without any surrounding loop, it would jump outside of the “It” block and make the test suite fail unexpectedly. To prevent this, a single-iteration loop is added around every test case.
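The pattern can be sketched like this:

```powershell
do {
    # Invoke the test's script block; a stray 'break' in the test
    # code is absorbed by this single-iteration loop instead of
    # escaping the It block.
    & $ScriptBlock
} until ($true)
```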

The setup and teardown functionality enables you to run code before and after every test case. Using a combination of try-catch-finally and try-catch blocks, it guarantees that the teardown code will run even if the test fails, while an exception in the teardown code will not fail the whole suite.

Skip and Pending parameters enable you to force the test into states different from Pass and Fail. Skip and Pending states are useful for temporarily putting some tests on hold and for notifying you of empty tests.

The TestCases parameter enables you to define examples of input and expected values, which results in the test being run once for each of the examples.

Context and Describe

Context and Describe are mainly so-called syntactic sugar: language constructs that help us explain our intentions better. They also let us organize the code into per-feature and per-use-case groups. Context and Describe differ slightly in Pester, but we will disregard those minor differences and implement both groups with a single function.
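A sketch of such a Test-Block function, based on the description that follows:

```powershell
$Global:Indent = 0

function Test-Block ([string]$Name, [scriptblock]$ScriptBlock) {
    # Translate the indentation level to tab spacing.
    Write-Host (("`t" * $Global:Indent) + $Name)
    $Global:Indent++
    try {
        # Any code can be provided here, so protect the runner from failures.
        & $ScriptBlock
    }
    catch {
        Write-Host -ForegroundColor Red (("`t" * $Global:Indent) + "Block failed: $_")
    }
    finally {
        $Global:Indent--
    }
}
```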

Test-Block again uses the familiar try-catch pattern. This is not by accident: any code can be provided in the $ScriptBlock, which means that the code in Test-Block might fail, and we need to protect our test runner from that. We do not want exceptions to prevent us from running all test blocks.

Apart from that, we also throw in some nice indentation. The indentation level is stored in a global variable and simply translated to tab spacing.

Summary

This concludes our review of Describe, Context, and It. Hopefully you saw that a test runner is very simple at its core. Both Test-Case and Test-Block use very similar code: they execute the input ScriptBlock and handle every possible exception.

Do not worry if the implementation of Test-Case seems too simplified in comparison to It. The It function handles more concerns than just error handling and output that we described in this article. We will look at those concerns in some of the upcoming articles.

Pester Explained: Should

(Originally published on PowerShell Magazine, December 2, 2015.)

This article is a part of a larger series on Pester.

Last time we looked at the theory of assertions and the mechanisms they use to fail our tests. We also wrote two assertions of our own to become part of our own test framework. In this article we won’t be continuing with our own framework, though. Instead we will look more closely at the actual implementation of Pester assertions and walk through the process of failing a test in Pester.

In this article we will be looking just at the Should Be family of assertions, but keep in mind that the rest of the assertions work the same way: a condition is evaluated (be it a value comparison, whether a file exists, or whether an exception was thrown), and if the condition is not satisfied (False), an exception is thrown.

Digging in

When you look inside the sources of Pester, you will find a whole folder dedicated to assertions. This folder is, unsurprisingly, called “Assertions” and resides inside the “Functions” folder. Just as the assertion keywords are split into two words, Should and Be, the assertion implementation is also split into two kinds of files: Should.ps1, which defines the shared logic of all Pester assertions, and Be.ps1, Throw.ps1, Exist.ps1, etc., which contain logic specific to the respective assertions.

Be.ps1

Looking inside Be.ps1, at the top of the file is the function that we looked at in the previous article: the equality condition that determines the result of the test:
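In the Pester 3-era sources it looks essentially like this:

```powershell
# The assertion condition for Should Be: a plain equality test.
function PesterBe($value, $expected) {
    return ($expected -eq $value)
}
```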

It uses the standard equality (-eq) operator of PowerShell, and returns a Boolean (true or false) result. Throwing the exception is done elsewhere.

You might also notice that the function is called “PesterBe” rather than “Be”. This naming convention was chosen in the early versions of Pester to avoid conflicts with user-defined code. The need for it was eliminated by putting the whole Pester runtime in a different scope (in version 3), effectively hiding the internals of Pester from user code. The details of how that is done will be described in a future article.

Further down the file you may also notice the implementation of another, very similar assertion named PesterBeExactly. This case-sensitive version of the Be assertion uses the case-sensitive equality operator (-ceq), so the different behavior only applies to strings.

Both assertion condition implementations are accompanied by multiple functions that produce the various failure messages. The ones with Exactly in the name are used when the BeExactly assertion is used, and the ones with Not in the name are used when a negative version of the assertion (e.g. Should Not Be) is called. The functions are also type aware, so a message pointing at the first differing character is produced when comparing strings.

The assertion condition (the PesterBe function) remains the same for the negative and non-negative calls of the assertion, though; the result is simply negated when a negative assertion is used.

Should.ps1

The Should.ps1 file holds the shared logic for the assertions. When calling an assertion, you are in fact invoking a function named Should that takes pipeline input and an indeterminate number of arguments (notice the $args variable and the absence of a param block).

This, in theory, would mean that you could have a very rich API for assertions, but in reality parsing a vast number of different inputs correctly while keeping an intuitive syntax is difficult to do, so the parsing logic is kept very simple. In general, the expected input is assumed to be this:

<Actual> | Should (optional)Not <AssertionName> <ExpectedValue>

Any additional arguments are simply ignored.

The processing of the input is done in Parse-ShouldArgs, where the captured input is processed. Let’s see how “1 | Should Not Be 10” would be processed:
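Roughly, the parsed arguments end up in a hashtable like this (a sketch, not the literal Pester output):

```powershell
# What Parse-ShouldArgs produces for: 1 | Should Not Be 10
@{
    AssertionMethod   = 'PesterBe'   # 'Be' translated to the PesterBe function
    PositiveAssertion = $false       # because of the 'Not'
    ExpectedValue     = 10
}
```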

The “Be” assertion method is translated to “PesterBe”, referring to the function we saw earlier in the Be.ps1 file. The “Not” is captured as “PositiveAssertion:$False”, and the expected value obviously became the “ExpectedValue”.

Notice that the actual value is not captured in the output. That is because the call to Parse-ShouldArgs is placed in the begin block of the function, where the pipeline input is not yet available. The actual value will be captured later.

Note: This approach to calling functions is totally incoherent with the rest of the PowerShell cmdlets. In your functions you should follow the correct approach of defining named parameters that take a single argument value, or in special cases define the ValueFromRemainingArguments attribute (see Write-Host). Avoid using $args for anything other than getting an indeterminate amount of data. The way $args is used in Pester is a legacy of the early versions of Pester, where at first a fluent-API-like syntax was used, which was later migrated to the current approach in an attempt to closely follow the language of the RSpec testing framework. We are aware that the current syntax could be improved greatly, but unfortunately it is so widely used that it is unlikely to go away any time soon.

When the arguments have been parsed by Parse-ShouldArgs and saved in $parsedArgs, the Should command enters its end block. In this end block, the actual value provided through the pipeline (in our case the number 1) becomes available. Should then continues by stepping through its pipeline input, invoking Get-TestResult on each item.

The Get-TestResult function, residing in the Should.ps1 file, is rather simple. It takes the parsed Should arguments (including the expected value) and the actual value, and returns a Boolean result. To determine the result, it invokes the assertion condition (the PesterBe function) on the expected and actual values.

The invocation of the assertion condition function is done via the ‘&’ invocation operator. This works because of the aforementioned naming convention for those functions: Pester + <AssertionName> (e.g. Pester + Be).
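The mechanism can be sketched like this:

```powershell
# Hypothetical assertion condition following the Pester + <AssertionName> convention.
function PesterBe($value, $expected) { return ($expected -eq $value) }

$assertionName = 'Be'
# Build the function name as a string and invoke it with the & operator.
& ('Pester' + $assertionName) 10 10   # returns True
```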

At this point we know whether the assertion passed or failed, but we only have a True/False result; no exception has been thrown yet. That is the last thing that happens in the Should function.

The result of the call to Get-TestResult is inspected, and if it is False, a failure message is obtained and a Pester-specific exception is thrown. This exception stops the test from executing and fails it, exactly as described in the previous article.

Note: The failure message is obtained by pretty much the same process as getting the result of the test, but instead of invoking the assertion condition function (PesterBe), an assertion message function is called (e.g. NotPesterBeFailureMessage), producing the appropriate message.

Comparing the implementations

In the previous article we created a simple implementation of an assertion that did not take into account any parsing issues, different types of input objects, or pipeline input. This left us with an extremely simple implementation, consisting only of a single “if” and “throw”:
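That implementation looked along these lines (parameter names assumed):

```powershell
function Assert-Equal ($Actual, $Expected) {
    # Single if-and-throw: fail the test by throwing an exception.
    if ($Expected -ne $Actual) {
        throw "Expected '$Expected', but got '$Actual'."
    }
}
```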

This confirms that the theory we learned last time is applied in the actual code of Pester.

Summary

In this article we looked closely at the implementation of the Should command in Pester, described the process needed to fail an unsuccessful test, and compared the theory that we learned with the actual implementation.

Next time we will look at the It and Describe blocks, how the tests are actually executed, and how the suite prevents failing on every failed test.

Pester Explained: Introduction and Assertions

(Originally published on PowerShell Magazine, December 1, 2015.)

This article is a part of a larger series on Pester.

I always found the word framework intimidating. It’s probably because my first encounter with the word was in .NET Framework, which at that point in time was total magic to me. There were tons of classes, keywords, and other things, and everybody except me seemed to know the secret formula to connect the pieces together to make them do awesome stuff. And I was sitting there copying code from a book, being unable to make it work most of the time. And when it worked, I was just waiting for the moment when the magic would stop working and my code would break again. All in all, it took me a long time to understand that a framework is just code. I am not saying that I am a .NET expert now, but it feels liberating to know that no magic is likely happening in the core of my program.

This brings me to Pester. Pester is also called a framework, and what’s more, a TESTING framework. That’s like double magic, or at least it was for me when I started learning about testing. And I am afraid a lot of you are in the same position: wanting to learn how to test, but always fearing that the magical stuff happening inside of the framework will stop working.

But you can’t be further from the truth, Pester is just code, and in the basic form quite simple code. And that’s why I am writing this series of posts that will cover the basic building blocks of Pester, and where you will get to write your own simpler version of Pester.

In the end you will hopefully be convinced that a framework is not much more than a pile of code that does exactly what it’s told.

Assertion theory

Let’s start from the end, that is from the assertions, and work our way from the inside out.

The assertions are what decide whether a test passes or fails. An assertion in Pester is represented by the word Should plus a second word, such as Be or Exist, that determines what kind of assertion should be used. The most used combination of these words is Should Be, which tests whether the actual and expected values are equal, so we will use that for our first example.

$expected = 8
$actual = 4
$actual | Should Be $expected

The $actual value typically wouldn’t be hardcoded in the test; rather it would be the result of some function call, such as Get-ProcessorCoreCount. The test would actually look more like this:
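With Get-ProcessorCoreCount (the hypothetical function mentioned above), the test reads:

```powershell
$expected = 8
# Get-ProcessorCoreCount is the hypothetical function under test.
$actual = Get-ProcessorCoreCount
$actual | Should Be $expected
```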

This brings one problem, though: if we put two such assertions in a row, it does not matter whether the first one fails or passes, because only the last assertion would determine the outcome of the test.

That is definitely not correct. We want a failing first assertion to stop the execution of the test. This can likely be done in many ways, but all the testing frameworks I know throw an exception to do it. Our assertion would look like this:
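In other words, the assertion wraps the comparison in an “if” that throws:

```powershell
# A Should Be style assertion that throws on failure, stopping the
# test at the first failed assertion.
if ($actual -ne $expected) {
    throw "Expected '$expected', but got '$actual'."
}
```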

Writing your own assertion

I promised that we would write a testing framework of our own, so let’s start. It won’t be exactly like Pester, but it will be pretty close. Our own framework will definitely need assertions, but we will avoid all the unnecessary parsing Pester does and create a function called Assert-Equal, which will be very simple:
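A sketch of such an Assert-Equal function (parameter names assumed):

```powershell
function Assert-Equal ($Actual, $Expected) {
    # Throwing fails the test and stops it at the first failed assertion.
    if ($Expected -ne $Actual) {
        throw "Expected '$Expected', but got '$Actual'."
    }
}
```

For example, `Assert-Equal -Actual 4 -Expected 8` throws, while `Assert-Equal -Actual 8 -Expected 8` produces no output.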

Expecting exceptions

As we saw earlier, using exceptions to fail tests is great because the test stops executing as soon as any assertion fails. There is also another reason why exceptions are used in most testing frameworks: when code fails, it usually throws an exception, which in turn makes your test fail without using any assertion.

But what if throwing an exception is the expected outcome of the test? What if we want to test that our code throws FileNotFoundException when we try to read from a file that does not exist?
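The script described in the next two paragraphs looks like this (Read-File is a hypothetical function):

```powershell
$exceptionWasThrown = $false

try {
    # Read-File -Path C:\NotExistingFile.txt
}
catch {
    # Swallow the exception and remember that one was thrown.
    $exceptionWasThrown = $true
}

if (-not $exceptionWasThrown) {
    throw 'Expected an exception to be thrown but no exception was thrown.'
}
```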

Notice that the call to Read-File is commented out, so by default no code is executed in the try block and no exception is thrown inside of it. This means that $exceptionWasThrown remains $False, and so the last “if” throws an exception saying ‘Expected an exception to be thrown but no exception was thrown.’, which fails the test.

Now try to uncomment the call to Read-File. Read-File will throw an exception (unless you actually have a file called NotExistingFile.txt on your C: drive, of course). This exception will be swallowed by the catch block, preventing our test from failing. The catch block also sets the $exceptionWasThrown variable to $True, the condition of the last “if” is not satisfied, and as a result the whole script produces no output, meaning that our test passed.

Assert-Throw

We could definitely use such a useful assertion in our own framework, so let’s create a function named Assert-Throw, which will look like this:
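A sketch of such an Assert-Throw function:

```powershell
function Assert-Throw ([scriptblock]$ScriptBlock) {
    $exceptionWasThrown = $false

    try {
        # Execute the provided piece of code at the right time and place.
        & $ScriptBlock
    }
    catch {
        $exceptionWasThrown = $true
    }

    if (-not $exceptionWasThrown) {
        throw 'Expected an exception to be thrown but no exception was thrown.'
    }
}
```

With the hypothetical Read-File from earlier: `Assert-Throw { Read-File -Path C:\NotExistingFile.txt }` – no output means the test passed.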

As you can see, the body of the function is almost the same as the code in the previous script; the only challenge we needed to solve was getting the piece of code that should throw an exception into the function as a parameter. We used a script block for that. A script block is a piece of code that we can execute at the right time and place using the ‘&’ operator.

You can try using our new assertion, but expect no output, because Read-File throws an exception, which is what we want to happen, and so our test passes:

Is there more to it?

Now that we covered two different assertions, you might ask: Is there more to it? And the answer would be: No, not really.

Every other assertion in Pester is just a variation on Should Be (or in our case Assert-Equal). The difference between Should Be, Should Match, and Should Exist is only in the condition: in the “if” condition they use -eq, -match, and Test-Path respectively, but the mechanism remains the same.

The single exception to this rule is the Should Throw assertion, which on the outside acts the same as the other assertions, but internally uses slightly different code: it throws an exception when none was thrown, and does nothing when one was thrown. If you’d like to compare the actual implementation in Pester with our own simplified assertion, and I encourage you to do so, please go to line 11 in PesterThrow.ps1.

Summary

So far you learned how assertions work internally and how they make tests fail. Next time we will look at the insides of Pester, and walk through the actual assertion implementation.