PowerShell Workflows: Nesting

So far you have seen only a single workflow being used at a time. If you think for a minute about how you use your Windows PowerShell scripts, you probably notice that you build a number of functions that you re-use and call from other functions and scripts. The whole concept of re-usability should permeate your Windows PowerShell code so that you maximize the return from the time and effort you put into developing your code. The techniques you should adopt to ensure maximum re-usability are best left to another article, but for now, we’ll concentrate on how Windows PowerShell functions and Windows PowerShell workflows can be used inside other workflows. The whole topic of how you should design your workflows will be covered in a later article. Before you can do that design, you need to understand the mechanisms you can use to re-use existing functionality. That functionality breaks down into three broad groups:

PowerShell workflows – (integrating with workflows you create in Visual Studio is possible, but it is beyond the scope of most administrators and of this series).

PowerShell functions – either in the same script file as the workflow or through a Windows PowerShell module.

PowerShell scripts – on the local or remote machine.

Let’s start by looking at how your workflow can interact with other workflows by using a practical example from your Active Directory administration tasks. It is generally regarded as good practice to clean up the accounts in your Active Directory. You would normally look at disabled accounts, expired accounts, and accounts with passwords that never expire. A decision can be taken on what to do with each account after you’ve identified accounts that match your criteria. To find disabled accounts, run:
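The original listings are not preserved here, but a hedged sketch of the three searches, using the Search-ADAccount cmdlet (which the article uses for all three tasks) with illustrative output paths, and assuming the ActiveDirectory module is available, might look like this:

```powershell
# Hedged sketch -- output paths are illustrative, not the article's originals.
# Requires the ActiveDirectory module (auto-loaded in PowerShell 3.0 and later).

# Disabled accounts
Search-ADAccount -AccountDisabled -UsersOnly |
    Export-Csv -Path c:\adreports\disabledaccounts.csv -NoTypeInformation

# Expired accounts
Search-ADAccount -AccountExpired -UsersOnly |
    Export-Csv -Path c:\adreports\expiredaccounts.csv -NoTypeInformation

# Accounts whose passwords never expire
Search-ADAccount -PasswordNeverExpires -UsersOnly |
    Export-Csv -Path c:\adreports\passwordnexpire.csv -NoTypeInformation
```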

These are three simple scripts that will be familiar to Active Directory administrators. Using them is more efficient than performing the task by hand, but you have to run them sequentially. Can workflows help us introduce some parallelism? The most direct approach is to wrap the scripts in a single workflow:
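A hedged sketch of the wrapped workflow (output paths are illustrative) might look like this:

```powershell
# Hedged sketch: the three searches wrapped in one workflow.
# Statements inside the parallel block can run concurrently.
workflow get-ADReport {
    parallel {
        Search-ADAccount -AccountDisabled -UsersOnly |
            Export-Csv -Path c:\adreports\disabledaccounts.csv -NoTypeInformation
        Search-ADAccount -AccountExpired -UsersOnly |
            Export-Csv -Path c:\adreports\expiredaccounts.csv -NoTypeInformation
        Search-ADAccount -PasswordNeverExpires -UsersOnly |
            Export-Csv -Path c:\adreports\passwordnexpire.csv -NoTypeInformation
    }
}

get-ADReport
```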

This works very well, with the three CSV files being produced almost simultaneously on my test system. The drawback is that you can't search for any individual type of account—you get all three or nothing. You have a couple of options. First, you can create individual workflows for each search and nest them:
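A hedged sketch of the nested approach (workflow names and output paths are illustrative) might look like this:

```powershell
# Hedged sketch: one workflow per search, defined inside the parent workflow
workflow get-ADReport {
    workflow get-disabledaccount {
        Search-ADAccount -AccountDisabled -UsersOnly |
            Export-Csv -Path c:\adreports\disabledaccounts.csv -NoTypeInformation
    }
    workflow get-expiredaccount {
        Search-ADAccount -AccountExpired -UsersOnly |
            Export-Csv -Path c:\adreports\expiredaccounts.csv -NoTypeInformation
    }
    workflow get-passwordNexpire {
        Search-ADAccount -PasswordNeverExpires -UsersOnly |
            Export-Csv -Path c:\adreports\passwordnexpire.csv -NoTypeInformation
    }

    # Call the nested workflows concurrently
    parallel {
        get-disabledaccount
        get-expiredaccount
        get-passwordNexpire
    }
}
```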

This loads the individual workflows into memory so that you can also use them individually. A simpler way, which makes maintenance easier, is to move the individual workflows out of the main workflow, like this:
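A hedged sketch of the restructured version (same illustrative names and paths) might look like this:

```powershell
# Hedged sketch: the individual workflows defined before, and outside,
# the main workflow that calls them
workflow get-disabledaccount {
    Search-ADAccount -AccountDisabled -UsersOnly |
        Export-Csv -Path c:\adreports\disabledaccounts.csv -NoTypeInformation
}

workflow get-expiredaccount {
    Search-ADAccount -AccountExpired -UsersOnly |
        Export-Csv -Path c:\adreports\expiredaccounts.csv -NoTypeInformation
}

workflow get-passwordNexpire {
    Search-ADAccount -PasswordNeverExpires -UsersOnly |
        Export-Csv -Path c:\adreports\passwordnexpire.csv -NoTypeInformation
}

workflow get-ADReport {
    parallel {
        get-disabledaccount
        get-expiredaccount
        get-passwordNexpire
    }
}
```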

Take this a stage further: separate your workflows into individual files and create a .psm1 file to load them as a module. You can then add functionality in a granular manner without affecting the bulk of your code. I've shown these workflows being used in parallel, but the same approach works when you need to guarantee sequence—for example, when you need to create an Active Directory account before creating the mailbox. Functions are handled in a similar way:

workflow get-computersystem {
    param([string[]]$computerName)

    function get-fcomputersystem {
        param ([string]$fcomputer)
        Get-WmiObject -Class Win32_ComputerSystem -ComputerName $fcomputer
    }

    # The contents of the foreach block will be executed in parallel
    foreach -parallel ($computer in $computerName) {
        if (Test-Connection -ComputerName $computer -Quiet -Count 1) {
            get-fcomputersystem -fcomputer $computer
        }
        else {
            "$computer unreachable"
        }
    }
}

get-computersystem -computerName $env:COMPUTERNAME

In this workflow, a list of computer names is passed in through the computerName parameter. A foreach -parallel loop iterates over the list of computers. Test-Connection determines whether each remote system is contactable and, if so, the function is called. In this case, the function is defined inside the workflow. You could just as easily have defined it outside the workflow, as in the nested workflow example. Similarly, you could put the functions into a separate script and load them, along with the workflow, as part of a module. The important point is that the workflows or functions you want to call must be loaded, or defined, before you use them.

Scripts are the third, and last, of the methods you can use to re-use existing code. Take the three scripts utilizing Search-ADAccount introduced at the top of the article and put each into its own file, so that you end up with three script files. I've called them:

get-disabledaccount.ps1

get-expiredaccount.ps1

get-passwordNexpire.ps1
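Each file would contain a single search. As a hedged example, get-disabledaccount.ps1 might contain nothing more than (output path is illustrative):

```powershell
# get-disabledaccount.ps1 -- hypothetical content
Search-ADAccount -AccountDisabled -UsersOnly |
    Export-Csv -Path c:\adreports\disabledaccounts.csv -NoTypeInformation
```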

You still want these to run in parallel, so you might try this:

workflow get-ADReport {
    parallel {
        c:\adreports\get-disabledaccount.ps1
        c:\adreports\get-expiredaccount.ps1
        c:\adreports\get-passwordNexpire.ps1
    }
}

Unfortunately, this won’t work, and you will see an error:

At line:3 char:4
+ c:\adreports\get-disabledaccount.ps1
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Cannot find the 'c:\adreports\get-disabledaccount.ps1' command. If this
command is defined as a workflow, ensure it is defined before the workflow
that calls it. If it is a command intended to run directly within Windows
PowerShell (or is not available on this system), place it in an InlineScript:
'InlineScript { c:\adreports\get-disabledaccount.ps1 }'
    + CategoryInfo          : ParserError: (:) [], ParseException
    + FullyQualifiedErrorId : CommandNotFound

So, you could try this:

workflow get-ADReport {
    inlinescript {
        c:\adreports\get-disabledaccount.ps1
        c:\adreports\get-expiredaccount.ps1
        c:\adreports\get-passwordNexpire.ps1
    }
}

It works, but are you getting the parallelism you need? The three scripts inside a single InlineScript block run sequentially. The way to ensure parallelism is to run each script in its own InlineScript:

workflow get-ADReport {
    parallel {
        inlinescript { c:\adreports\get-disabledaccount.ps1 }
        inlinescript { c:\adreports\get-expiredaccount.ps1 }
        inlinescript { c:\adreports\get-passwordNexpire.ps1 }
    }
}

Each separate InlineScript section will be run in parallel. What about the situation where you want to run a script that exists on a remote system? Put the scripts in the C:\ADReports folder on the remote machine and run your workflow like this:

PS> get-ADReport -PSComputerName dc02

The scripts will run on the remote machine and, because we haven’t modified them, that’s where the output will be produced. PSComputerName is one of a number of parameters that are automatically added to every workflow when it is created. They are documented in the About file:

Get-Help about_WorkflowCommonParameters
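Assuming the get-ADReport workflow from earlier has been loaded into your session, you can confirm these automatically added parameters for yourself:

```powershell
# List every parameter on the workflow, including the workflow common
# parameters that PowerShell adds automatically (PSComputerName among them)
(Get-Command get-ADReport).Parameters.Keys
```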

Summary

You saw in earlier articles that Windows PowerShell workflows can incorporate a number of elements:

Workflow activities

Workflow language features, especially:

Parallel

Foreach -parallel

Inline PowerShell scripts, including cmdlets and language features for which workflow activities weren’t created

This article has shown you how to re-use existing code by incorporating:

PowerShell workflows

PowerShell functions

PowerShell scripts on local and remote machines

Workflows, as you have seen, are like Windows PowerShell, but different. They are versatile enough that you should be able to perform whatever tasks you need; however, you may have to work a bit harder to get to that point. One of the great strengths of workflows is that they can be stopped and restarted, because they utilize the Windows PowerShell job engine. That’s what we’ll look at next time, including how to checkpoint workflows and how to stop and restart them.

~Richard

Thank you, Richard, for another awesome article about workflows. Join me tomorrow when I will talk about more cool Windows PowerShell stuff.

I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at scripter@microsoft.com, or post your questions on the Official Scripting Guys Forum. See you tomorrow. Until then, peace.