Giving Something Back
https://maikkoster.com

Creating a collection of VPN devices
Thu, 09 Apr 2020 – https://maikkoster.com/creating-a-collection-of-vpn-devices/

With so many people working from home these days, handling devices that are connected via VPN has become challenging for many companies. Looking at this from the ConfigMgr perspective, there are several things we might want to do differently, depending on whether a device is connected via VPN or on-premises: we might want to handle patching differently, adjust a few client settings, etc.

However, that still doesn't tell us which devices are actually connected via VPN.

How to identify a device connected via VPN

There are different ways to identify whether a device is connected via VPN. One of the easiest in ConfigMgr is simply based on the boundary, and that's the one we will concentrate on in this post. You very likely have one or multiple IP ranges for your VPN clients, and I assume you have already created boundaries based on these IP ranges. All these VPN-related boundaries should be within one boundary group that has no other boundaries assigned. For certain scenarios it might even be useful to have multiple boundary groups, but that doesn't really change our approach. The key aspect is that these VPN boundary group(s) contain only VPN-related boundaries.

Which made me think: as this information is available within the ConfigMgr console, we should theoretically be able to use it to create a collection that contains all devices currently connected via VPN. As this information seems to come from the Fast Channel, it should also be pretty up to date. However, as the documentation says it updates "at most every 24 hours", this could be a wrong assumption. It needs some further testing to see how timely the information actually is.

SMS_CombinedDeviceResources

So where do we find this information? Digging through the internals of ConfigMgr, I found the SMS_CombinedDeviceResources class, which has the property "BoundaryGroups". It has lots of other interesting properties as well, which basically match the list of columns available when listing the devices of a collection. So it's fair to assume that this is the underlying class used by the ConfigMgr console.

Based on this information, all it takes now is to create a query that joins this class and filters on the BoundaryGroups property. The query statement could look like:
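The following is a sketch of such a WQL collection query rule, joining SMS_CombinedDeviceResources to SMS_R_System (the selected columns are illustrative, the filter is explained below):

```sql
select SMS_R_System.ResourceId, SMS_R_System.Name
from SMS_R_System
inner join SMS_CombinedDeviceResources
    on SMS_CombinedDeviceResources.ResourceID = SMS_R_System.ResourceId
where SMS_CombinedDeviceResources.BoundaryGroups like '%VPN%'
```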

This assumes that the boundary group(s) have "VPN" in their name; please adjust if you used a different naming convention for your boundary groups. The query evaluates relatively quickly, even in large environments. You might even want to enable incremental updates on the collection, so new machines on VPN are added as quickly as possible.

Boundary Group Caching

If you are not yet on ConfigMgr version 2002, you can get similar behavior by leveraging the information from Boundary Group Caching, which was added in version 1511. It's basically the same source of information; however, by default it's only available on the client. Jason Sandys has a great post about "Boundary Group Caching", covering where it is stored and how to extend your current hardware inventory to collect this information and make it available this way.

If you added this information according to Jason's post, you could get a similar result using:
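Assuming the inventoried class ended up as SMS_G_System_BOUNDARYGROUPCACHE (the actual class name depends on how you set up the inventory extension per Jason's post), a query sketch could be:

```sql
select SMS_R_System.ResourceId, SMS_R_System.Name
from SMS_R_System
inner join SMS_G_System_BOUNDARYGROUPCACHE
    on SMS_G_System_BOUNDARYGROUPCACHE.ResourceID = SMS_R_System.ResourceId
where SMS_G_System_BOUNDARYGROUPCACHE.BoundaryGroupIDs like '%XXXXX%'
```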

Where XXXXX is the internal ID of the boundary group. We can't really join BoundaryGroupIDs to SMS_BoundaryGroup and use the name, as this value is a comma-separated list of all boundary groups the client is a member of. That's also the reason for using LIKE rather than an equality comparison. This shouldn't be a big problem, as we assume you are only dealing with a single, or at most very few, boundary groups.

The biggest drawback here: as this is based on hardware inventory, the information is not as up to date as you might need it to be.

SMS_Boundary

Another way of getting similar results could be leveraging the SMS_Boundary class and joining it to the IPSubnets property of SMS_R_System. However, this is not feasible for our scenario, as the boundaries would need to be configured as IP subnets rather than IP ranges. The bad part is that they need to match the subnet as it's configured on the client side. VPN solutions typically take IP ranges and assign a single IP subnet with a 255.255.255.255 network mask to each VPN client. So to be able to join them properly, you would have to replicate this configuration on the boundary side, which would mean you end up with thousands of micro boundaries. Also, this is again based on hardware inventory, so it might lack timeliness. I just wanted to list it here, as it has some other use cases if you have boundaries defined as IP subnets and would like to create a collection based on this information. However, if you are on version 2002, I'd definitely recommend using SMS_CombinedDeviceResources for those cases as well.

Tunnel a PowerShell script to a remote machine and invoke via WMI
Wed, 11 May 2016 – https://maikkoster.com/tunnel-a-powershell-script-to-a-remote-machine-and-invoke-via-wmi/

In my blog post "Invoke a remote command without WinRM, psexec or similar – Access administrative shares even if they have been removed" I demonstrated how to use WMI to execute a command on a remote computer. The task was pretty simple, as I only had to create a share. However, as this worked pretty well, and also out of curiosity, I wanted to know whether I could use the same process for more complex scenarios.

So I wanted to know if I could also invoke a full PowerShell script this way, while the script itself is not available on the remote computer.

Let's start with a small script. I'm using a ScriptBlock for demonstration purposes, but reading a script file works the exact same way. To keep it simple, I'm just reading the folders on the system drive and exporting them to a csv file in the temp folder:
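A minimal sketch of such a ScriptBlock (the selected folder properties and the csv file name are just illustrative):

```powershell
$ScriptBlock = {
    # Enumerate the top-level folders on the system drive
    # and dump them to a csv file in %TEMP%
    Get-ChildItem -Path "$env:SystemDrive\" -Directory |
        Select-Object -Property Name, FullName, LastWriteTime |
        Export-Csv -Path (Join-Path $env:TEMP 'Folders.csv') -NoTypeInformation
}
```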

Now a quick look at how to execute a PowerShell script. Looking at the command line options of PowerShell.exe, most of you probably know the File parameter, which can be used to execute a script file. In the current scenario this won't really help, as the script isn't available on the remote computer. Another option is the Command parameter, which takes either a string or a ScriptBlock. However, as our ScriptBlock/script can contain special characters, line breaks, quotation marks, etc., it might get complicated to escape them properly. A relatively unknown parameter is EncodedCommand, which takes a Base64-encoded string.

So let’s get a Base64 encoded string from the ScriptBlock using the following snippet:
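PowerShell.exe expects the encoded command to be the Base64 representation of the UTF-16LE (Unicode) bytes of the script. A sketch, reusing the $ScriptBlock from above and starting the process remotely via WMI (the computer name is illustrative):

```powershell
# Encode the ScriptBlock (PowerShell.exe expects UTF-16LE for -EncodedCommand)
$EncodedCommand = [Convert]::ToBase64String(
    [Text.Encoding]::Unicode.GetBytes($ScriptBlock.ToString()))

# Invoke it on the remote computer via WMI
Invoke-WmiMethod -ComputerName 'RemotePC01' -Class Win32_Process -Name Create `
    -ArgumentList "PowerShell.exe -NoProfile -EncodedCommand $EncodedCommand"
```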

Execute it and check on the remote computer whether it created the file in your temp folder.

Tadaaaaa.

A few things to note:

The script should run completely unattended.

Make sure it’s handling errors and exceptions properly.

You won’t get any direct feedback from the script.

You will not be able to interact with any network location.

Variable substitution can be challenging.

CIM vs. WMI CmdLets – Speed comparison
Tue, 10 May 2016 – https://maikkoster.com/cim-vs-wmi-cmdlets-speed-comparison/

In one of my last posts I mentioned that for me the CIM CmdLets have been faster than the WMI CmdLets. That statement was based mainly on my personal impression from working with these CmdLets for quite some time across a pretty large array of different servers and tasks, supported only by some fairly basic tests.

So I was interested in comparing different aspects of this and did some more testing. This is for sure not a complete performance evaluation. It's more meant to get some numbers in different situations, and to give me (and hopefully you as well) a guideline on when it might make sense to choose one or the other.

The Test Setup

I executed all tests using the following snippet to measure the execution time:
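A sketch of the measurement logic (the class, the target $Computer, and the output file are illustrative; the actual tests varied the CmdLet and session type):

```powershell
# Run the test 100 times and record each duration in milliseconds
$Times = foreach ($i in 1..100) {
    (Measure-Command {
        Get-CimInstance -ClassName Win32_OperatingSystem -ComputerName $Computer
    }).TotalMilliseconds
}

# Export minimum, average, and maximum to csv
$Times | Measure-Object -Minimum -Average -Maximum |
    Export-Csv -Path .\Results.csv -NoTypeInformation -Append
```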

All of them were executed 100 times (10 times for listing the WMI classes), and the minimum, average, and maximum execution times in milliseconds were exported to a csv file. All tests were also executed on two different computers, targeting different sets of computers.

1. Working with a single computer

The first scenario is working with one single computer. That's probably the most common scenario, and a special case of it is the local computer. So I did all my testing on the local computer first, then on a few computers on the same local network, and then on a few computers on a remote network over a WAN connection.

Localhost

Localhost was pretty much as expected. A single execution of Get-WmiObject was ~2.5–3 times faster than Get-CimInstance, mostly because the latter has to create an implicit session first. If a session object is supplied, access is faster, as anticipated. However, it is interesting that with an explicit session, Get-CimInstance over WSMAN is almost the same speed as Get-WmiObject (~15% faster), while a DCOM session is about twice as fast on a single class.

Listing all classes was definitely different than expected. Get-CimClass is almost 3.5 times slower than Get-WmiObject -List, no matter whether using no session or a WSMAN session (the CIM CmdLets use an implicit WSMAN session by default), while in contrast a DCOM-based session was more than 3.5 times faster than the WMI CmdLet.

Local Network

To validate the above numbers, I had a look at the averages across several computers on the local network. The numbers look similar, but the performance difference isn't as large anymore: Get-WmiObject is "only" ~1.5 times faster when querying a single class, but Get-CimInstance with a DCOM session is still about 3.5 times faster than that. Also, WSMAN is more performant over the network than it was on the local computer.

However, when listing the classes, Get-CimClass with a WSMAN-based session was pretty slow again. Not as bad as on the local computer, but still about 30% slower than the WMI CmdLet.

I was curious whether that's a problem specific to Get-CimClass, so I executed a query that returns a pretty large amount of objects: in particular the event log, returning about 30–60K events on the computers used for testing. And it's pretty similar: the WMI CmdLet was about 60% slower than the CIM CmdLet, but again there isn't much difference between running Get-CimInstance without a session or with a WSMAN session. However, using a DCOM-based session was twice as fast as using the WSMAN-based session.

Fast WAN connection

The next round of tests was against computers on a relatively fast WAN connection (~20–60 ms response time). And here the WMI CmdLets really lost ground. Even without a session, the CIM CmdLets were about 4 times faster, and up to 21 times faster using a WSMAN session.

Pretty much the same when listing the classes; just the WSMAN session was slightly slower again.

Slow WAN Connection

The last round of tests was executed against computers on a relatively slow WAN connection (~150–250 ms response time). While the total times were definitely different, the relative performance differences between the tests were similar to the ones on the fast WAN connection. So not much to add here.

2. Working on multiple computers

The second scenario is working with multiple computers. For this, the tests described above were executed again, but this time not individually in a loop per computer; rather, the list of computers (or sessions) was passed to the WMI and CIM CmdLets, allowing them to handle the degree of parallelization internally.

And here the CIM CmdLets really shine. While the WMI CmdLets still executed each query individually, so the total execution time was more or less the sum of the individual tests from above, it's obvious that the queries are executed in parallel by the CIM CmdLets: their total time was always close to the slowest average execution time from the individual tests.

Conclusion

I have to say that those tests were pretty interesting. While this is definitely not an all-purpose, covering-everything type of test, and your results might look different, I still think it gives a pretty good indication of the performance aspects of both the WMI CmdLets and the CIM CmdLets.

My personal conclusion from those tests:

If you are targeting multiple computers, always use the CIM CmdLets! Really!!!

Preferably use dedicated sessions when working with the CIM CmdLets.

If you are looking for the last grain of speed and are working over a relatively fast connection, prefer DCOM over WSMAN.

When working mainly on slow connections with large response times, you should prefer WSMAN.

However, this is just the “performance” side of things. For a final decision, lots of other aspects have to be taken into account as well.

I’d be happy to hear about your experience with this topic.

PowerShell CIM CmdLets – Working with lazy properties
Wed, 06 Apr 2016 – https://maikkoster.com/powershell-cim-cmdlets-working-with-lazy-properties/

In my last post, I explained how to solve one of the stumbling blocks when working with the PowerShell CIM CmdLets. The second problem was working with lazy properties. It was both pretty nasty, as it basically prevented me from using the CIM CmdLets properly with SCCM, and pretty time consuming, as it was hard to find a solution.

The Problem

Lazy properties are unique to SCCM. As some of the objects in SCCM can be pretty large or can consume a lot of resources, the SCCM WMI provider has an additional qualifier called "lazy". All properties marked with this qualifier won't be returned by default. That means, if you e.g. get a list of SMS_AuthorizationList items (the equivalent of the "Software Update Groups" in the ConfigMgr console), the Updates property, which contains the list of updates of that Software Update Group, will be there, but it will be null. Or better: not set.

Why is this an issue?

First, you might actually need the value in your script. E.g. the above-mentioned list of updates is probably one of the more important pieces of information in this object.

The bigger issue, which is a real problem, arises if you try to update an object. Let's assume you want to change the description of an SCCM object. You query for this object, filtering e.g. by its current name or ID, then you update the description property, and as soon as you save the object back to WMI, you have basically corrupted it, as all lazy properties have been erased! Rest assured, you wouldn't be the first one to run into this issue.

With the WMI CmdLets, you could call the Get() method on the object (located via its __PATH property) to load the lazy properties first. However, the object returned by Get-CimInstance has neither a __PATH property nor a Get() (or equivalent) method!

The Solution

This was a really nasty problem for me, as I had started converting most of my scripts dealing with WMI-related operations to the CIM CmdLets, and I was unable to find a proper solution. I wasn't the only one; e.g. Trevor Sullivan published an Introduction to the CIMCmdlets PowerShell Module on The Scripting Guys blog, saying:

For example, people who are automating Microsoft System Center 2012 Configuration Manager, the lack of the __PATH property value is highly detrimental, as it does not allow them to deal with WMI classes & properties marked with the “Lazy” qualifier.

So whenever I had to deal with lazy properties in SCCM, I had to fall back to the WMI CmdLets, basically nullifying the benefits of the CIM CmdLets.

In the end, the solution came again from the post CIM Cmdlets – Some Tips & Tricks from the PowerShell team. It wasn't obvious, but "Tip #10 Making Get/Enumerate efficient" finally did the trick. It demonstrates how to refresh the data of an object without retrieving it again:

PS:> # Get instance of a class
PS:> $p = Get-CimInstance -ClassName Win32_PerfFormattedData_PerfOS_Processor
PS:> # Perform get again by passing the instance received earlier, and get the updated properties. The value of $p remains unchanged.
PS:> $p | Get-CimInstance | select PercentProcessorTime

The solution is to take the current "limited" object, which doesn't contain values for the lazy properties but does contain the properties themselves, pass it to the Get-CimInstance CmdLet, and save the result back to the original (or optionally a new) variable.
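A sketch of this, using the SMS_AuthorizationList example from above (the site code TST, the filter, and the description value are illustrative):

```powershell
# Lazy properties like "Updates" are null at this point
$UpdateGroup = Get-CimInstance -Namespace 'root\sms\site_TST' `
    -ClassName 'SMS_AuthorizationList' `
    -Filter "LocalizedDisplayName='My Update Group'"

# Pass the instance through Get-CimInstance again to load the lazy properties,
# and save the result back to the variable
$UpdateGroup = $UpdateGroup | Get-CimInstance

# Now it's safe to change a value and write the object back
$UpdateGroup.LocalizedDescription = 'Updated description'
Set-CimInstance -InputObject $UpdateGroup
```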

I hope you find this tip valuable; I was honestly struggling with this for months. For me it definitely is, as I can now concentrate on one set of CmdLets without worrying about which type of CmdLet to use for which type of operation.

PowerShell CIM CmdLets – Working with embedded or keyless classes
Tue, 05 Apr 2016 – https://maikkoster.com/powershell-cim-cmdlets-working-with-embedded-or-keyless-classes/

In my last post, I explained why I changed over to the "new" PowerShell CIM CmdLets. However, as mentioned there, there were a few pain points that I struggled with.

The Problem

One of them was working with embedded classes. These are classes that are "embedded" into other classes, very often used to store more complex properties. For example, the SMS_CategoryInstance class, which represents a category in SCCM, stores the localized category name(s) in an embedded class called SMS_Category_LocalizedProperties.
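For illustration, a naive attempt to create such an instance might look like this (the site code TST and the property values are made up):

```powershell
# This fails with error 0x80041089 (WBEM_E_NO_KEY),
# as SMS_Category_LocalizedProperties has no key property
New-CimInstance -Namespace 'root\sms\site_TST' `
    -ClassName 'SMS_Category_LocalizedProperties' `
    -Property @{ CategoryInstanceName = 'My Category'; LocaleID = 1033 }
```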

Trying to create an instance of this embedded class via New-CimInstance, you will end up with error 0x80041089. According to the WMI error constants, that's WBEM_E_NO_KEY, or better: "User attempted to put an instance with no defined key." Looking at the definition of SMS_Category_LocalizedProperties, it becomes obvious that this class doesn't have a key property at all. And now you are stuck: you can't create an instance of this embedded class, as it doesn't have a key property, but New-CimInstance requires one. And you can't create a new category, as LocalizedProperties (which takes an array of SMS_Category_LocalizedProperties) is a mandatory property and can't be null.

The Solution

I have to admit that it took me way longer than necessary to solve this. The final hint came from the PowerShell blog post about CIM CmdLets – Some Tips & Tricks, in particular Tip #5, "Passing ref and embedded instances".

The "trick" is to use the ClientOnly switch when creating the instance. This creates an in-memory instance of the class in PowerShell only, without going to the CIM server. So to create the above SMS_Category_LocalizedProperties instance, you can use the following snippet:
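A sketch of that (again, the site code and property values are illustrative):

```powershell
# Create an in-memory instance without contacting the CIM server
$LocalizedProperty = New-CimInstance -Namespace 'root\sms\site_TST' `
    -ClassName 'SMS_Category_LocalizedProperties' -ClientOnly `
    -Property @{ CategoryInstanceName = 'My Category'; LocaleID = 1033 }
```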

Don't use the ComputerName or CimSession parameter when working with ClientOnly; only supply the proper Namespace and ClassName. It doesn't matter if this namespace doesn't exist on the local computer. Otherwise you are likely to run into other issues; at least that's what happened to me, and it's why it took me so long to figure out that this is the real solution.

Preferably supply all properties you need to set when calling New-CimInstance, using the Property parameter. If you want to add a property later, use Add-Member. The following does the same as the original snippet; it's just harder to read:
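Sketched with Add-Member, assuming the same illustrative values:

```powershell
$LocalizedProperty = New-CimInstance -Namespace 'root\sms\site_TST' `
    -ClassName 'SMS_Category_LocalizedProperties' -ClientOnly

# Add the values afterwards as note properties
$LocalizedProperty | Add-Member -MemberType NoteProperty -Name 'CategoryInstanceName' -Value 'My Category'
$LocalizedProperty | Add-Member -MemberType NoteProperty -Name 'LocaleID' -Value 1033
```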

The biggest advantage is that you get an object with all properties defined in the class. I personally found this especially helpful when working with different versions of SCCM, e.g. when a property is only available in newer versions, but you want to set it if it's available. This way, you can first check whether the property exists and only set it if it does. That makes your script more resilient.

CIM vs. WMI CmdLets – The top reasons I changed over
Mon, 04 Apr 2016 – https://maikkoster.com/cim-vs-wmi-cmdlets-the-top-reasons-i-changed/

Some time ago I moved most of my WMI-related work from the "deprecated" PowerShell WMI CmdLets (Get-WmiObject, Invoke-WmiMethod, etc.) over to the "new" CIM CmdLets (Get-CimInstance, Invoke-CimMethod, etc.).

It wasn't really because they are "deprecated". Microsoft might treat them this way, but they will for sure be around for quite some time. It's like VBScript: it still has its purpose, and it will stick around. You can find a very good Introduction to CIM CmdLets on the Windows PowerShell Blog. For the typical user (so you and me) there are often only small differences, and for most of the stuff you typically do, they are kind of interchangeable.

Which leads to the question: why should one use the new CIM CmdLets?

My personal top three benefits that actually made me use the CIM CmdLets are:

Nr. 3 – The CIM session

By default, all CIM CmdLets support the ComputerName parameter, which allows you to connect to a remote computer to execute the commands. Pretty much the same as with the WMI CmdLets. However, they also all have a CimSession parameter, which takes a session object.

The CimSession allows you to use different credentials and even different authentication methods:
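A sketch of creating and using such a session (the computer name is illustrative):

```powershell
# Session with alternate credentials, using the DCOM protocol
# instead of the default WSMAN
$Credential = Get-Credential
$Option  = New-CimSessionOption -Protocol Dcom
$Session = New-CimSession -ComputerName 'Server01' -Credential $Credential -SessionOption $Option

Get-CimInstance -CimSession $Session -ClassName Win32_OperatingSystem

Remove-CimSession -CimSession $Session
```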

Nr. 2 – Parallel execution – “Fan out”

In my daily job, I often have to work with computers located around the globe. I need to collect information or execute some tasks on multiple of them, preferably in the shortest time possible.

Using the WMI CmdLets, you can pass in an array of computer names, but those are processed sequentially. If they are on a WAN connection, this just adds up. To reduce the total time, you have to implement your own methods to execute in parallel, e.g. using Invoke-Async, background jobs, PowerShell Workflow with ForEach -Parallel, or similar options. In contrast, working with multiple computers using the CIM CmdLets simply works like a charm and is pretty fast.

I did my own very basic speed comparison testing using 5 computers located at 5 different remote locations.
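The comparison boiled down to something like the following sketch:

```powershell
# WMI CmdLets: computers are processed sequentially
$WmiTime = Measure-Command {
    Get-WmiObject -ComputerName $Servers -Class Win32_OperatingSystem
}

# CIM CmdLets: computers are queried in parallel ("fan out")
$CimTime = Measure-Command {
    Get-CimInstance -ComputerName $Servers -ClassName Win32_OperatingSystem
}

"WMI: $($WmiTime.TotalSeconds) seconds."
"CIM: $($CimTime.TotalSeconds) seconds."
```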

I left out the definition of $Servers, as that's just an array of the above-mentioned 5 remote computers.

And the results speak for themselves:

WMI: 35.40861808 seconds.
CIM: 2.51654976 seconds.

I've seen some other speed comparisons that often come to the conclusion that the WMI CmdLets are as fast as or even faster than the CIM CmdLets. But mostly they were either running hundreds of iterations against the same, sometimes even the local, computer, or they had put the calls into a custom foreach loop, which basically disables the built-in parallelization of the CIM CmdLets.

In case you need to work with sessions, this works the exact same way as using plain computer names:
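E.g. by creating the sessions up front and passing the whole array:

```powershell
# One New-CimSession call creates a session per computer
$Sessions = New-CimSession -ComputerName $Servers

Get-CimInstance -CimSession $Sessions -ClassName Win32_OperatingSystem

Remove-CimSession -CimSession $Sessions
```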

Nr. 1 – Working with methods

Invoking a method via Invoke-WmiMethod is definitely easier than in VBScript, but it still has the drawback that you need to supply the method parameters (via ArgumentList) in a certain order. And if you think that this would be the order specified in the documentation, you might be wrong. The documentation for the InitiateClientOperation method specifies the arguments in the following order:

Type

TargetCollectionID

RandomizationWindow

TargetResourceIDs

However, the method actually expects the arguments in the following order:

RandomizationWindow

TargetCollectionID

TargetResourceIDs

Type

What? Yes!!!

As you can imagine, this makes it pretty cumbersome working with methods sometimes.

The second option would be to get a list of parameters first and then pass them as an object.

I personally prefer this way of handling methods: passing in a hashtable that contains the names and values of the method parameters.
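A sketch of invoking InitiateClientOperation this way via Invoke-CimMethod (the site code, collection ID, and the Type and RandomizationWindow values are illustrative; with a named hashtable, the argument order no longer matters):

```powershell
$Arguments = @{
    Type                = [uint32]1
    TargetCollectionID  = 'TST00001'
    RandomizationWindow = [uint32]1
    TargetResourceIDs   = $null
}

Invoke-CimMethod -Namespace 'root\sms\site_TST' -ClassName 'SMS_ClientOperation' `
    -MethodName 'InitiateClientOperation' -Arguments $Arguments
```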

For sure there are other interesting aspects as well, like the easy serialization/deserialization of objects via Export-CliXml and Import-CliXml, listing classes using Get-CimClass, the automatic conversion of e.g. the WMI DateTime format into the DateTime format used within .NET and PowerShell, and a bunch of others.

Drawbacks

There are two issues that you will run into sooner or later when working with the CIM CmdLets, especially with SCCM.

1. Working with embedded or keyless classes

If you try to create an instance of an embedded class like SMS_Category_LocalizedProperties via New-CimInstance, you will end up with error 0x80041089. According to the WMI error constants, that's WBEM_E_NO_KEY, or better: "User attempted to put an instance with no defined key." Looking at the definition of SMS_Category_LocalizedProperties, it becomes obvious that this class doesn't have a key property at all. And now you are stuck: you can't create an instance of this embedded class, as it doesn't have a key property, but New-CimInstance requires one. And you can't create a new category, as LocalizedProperties (which takes an array of SMS_Category_LocalizedProperties) is a mandatory property and can't be null.

Doh.

2. Working with classes that contain lazy properties

The second thing, pretty unique to but also common within SCCM, is the use of so-called lazy properties. As some SCCM objects contain pretty large properties, or properties that take quite some time to generate or process, those properties are marked as lazy and are not loaded by default. E.g. if you iterate through a list of those objects, all lazy properties will be empty/null.

Using the WMI CmdLets, one has to execute an explicit Get() on the WMI object to also load the lazy properties. And this is a necessity if you want to change a value! If you don't load the content of the lazy properties and then save the object, those properties will suddenly all be null. You wouldn't be the first one to accidentally corrupt a ConfigMgr object by simply changing a value.

With the CIM CmdLets, this problem isn't as serious, as the Set-CimInstance CmdLet allows you to supply a list of key-value pairs to set. So you are no longer getting the object, updating the values, and saving the whole object back; rather, you get the object and explicitly set only the properties that need to be changed. So there is no "side effect" on the lazy properties. But as there isn't any Get method on the CimInstance object itself, you simply can't read the lazy properties. Which might not be a problem, until you need to know the value of one of them.

For sure there are workarounds for those two issues. As this post is already way too large, I'll give you the solutions in separate posts.

However, I'm very interested in your personal experience with the CIM CmdLets.

Exporting Task Sequences from ConfigMgr to plain xml files
Thu, 31 Mar 2016 – https://maikkoster.com/exporting-task-sequences-from-configmgr-to-plain-xml-files/

In the last blog post I showed a script that allows you to import a Task Sequence from an xml file, like the ones created by the Task Sequence monitoring script (see https://maikkoster.com/versioning-monitoring-sccm-task-sequences/ for details).

As this covered a need that I basically created myself, by publishing a script that creates those xml files, you might wonder why anyone would need a custom script to export Task Sequences if there is already the Export-CMTaskSequence CmdLet in the ConfigMgr module.

Well, there are two major reasons for me:

The ConfigMgr module requires the ConfigMgr console to be installed. I personally like to be independent of this for "simpler" or automated tasks.

The export from the ConfigMgr console or the PowerShell module is a zipped file rather than a plain xml file. It can contain referenced packages, applications, etc. While this is a huge benefit for certain, probably most, scenarios, it's kind of cumbersome if you need to keep it simple.

Long story short, I actually need to have this functionality, so feel free to use it as well.

How to use it

The script comes with (hopefully) proper documentation. So calling the default

Get-Help .\Export-TaskSequence.ps1 -detailed

should give you a good start.

First, it requires the Path where the Task Sequences shall be exported to. In addition you need to supply the ID (Task Sequence PackageID) or the Name of the Task Sequence that you would like to have exported. You can also supply multiple PackageIDs or Names.

.\Export-TaskSequence.ps1 -Path "%Temp%\Export" -ID "TST00001"

On default, it will create a subfolder per Task Sequence and use the PackageID as the name with a 3-digit suffix. If you would like to change this behaviour, use the Filename parameter, which allows three different placeholders to be used.

#ID : which will be replaced with the PackageID of the exported Task Sequence Package

#Name : which will be replaced with the Name of the exported Task Sequence Package

#0, #00, #000, … : which will be replaced with an incrementing number based on the same name

The default value for the Filename parameter is "#ID\#ID_#000.xml". If you use a pattern without an incrementing number, you should also supply the Force parameter; otherwise the script won't overwrite an existing file:
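E.g. exporting a Task Sequence to a fixed file name (the Filename pattern shown is just one possible choice):

```powershell
.\Export-TaskSequence.ps1 -Path "$env:Temp\Export" -ID "TST00001" -Filename "#Name.xml" -Force
```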

Like most of my other scripts targeted at automated operations, it is quiet and doesn't output anything besides errors. However, especially when exporting several Task Sequences or working over a slow WAN connection, it might be helpful to show the current progress. For this, you can supply the ShowProgress parameter.

Finally, by adding the PassThru parameter, the script will output the path to each exported file, so it can be used for further processing.

There are two versions of this script available. The one referenced above is a standalone version, created by the script from my recent post "Creating standalone PowerShell scripts – automatically merging module components into scripts". The "real" script references an SCCM (ConfigMgr) module that I've published on GitHub as well. As I regularly apply changes to this module, it might be better to use the standalone version, as it is updated every time a function it uses changes. But that's up to you.

One of my probably most-used scripts is a VBScript that monitors for changes on Task Sequences in SCCM and exports a copy for backup purposes whenever an updated Task Sequence is saved. Please see https://maikkoster.com/versioning-monitoring-sccm-task-sequences-update-for-sccm-2012/ for further details about the script itself. Even though it was written for SCCM 2007, it still works for 2012 and above. The "only" drawback is that Microsoft changed the export format between versions 2007 and 2012. In 2007 and below, the exported Task Sequence was a plain xml file. Very easy to handle, and even better, very easy to adjust when it comes to replacing packages etc.

In version 2012 and above, though, this simple XML file was replaced with a rather complex zipped file. It still contains an XML file with the Task Sequence itself, but can also contain referenced packages, applications and a lot of additional information. This certainly has its advantages. However, for plain backup purposes, I personally still prefer the plain XML file. Automated syncing of Task Sequences between different SCCM environments, or moving changes from one Task Sequence to another, is also much easier with a plain XML file.

So while I was using this method quite regularly at work, I never actually published a script that would allow you to import those Task Sequence XML files, generated by the above-mentioned VBScript, back into an SCCM 2012+ environment. Shame on me, but here it is (finally).

The Solution

I finally published a cleaned-up version of a script that allows you to import a Task Sequence exported via the above-mentioned script and either replace an existing Task Sequence or create a new Task Sequence package.

Just to point this out again: this script won’t be able to import Task Sequences exported from the SCCM 2012+ console (it will, however, be able to consume the object.xml file from that export zip file)! It handles only plain XML files as created by the above-mentioned monitoring script, or its PowerShell equivalent, which still lets you export plain XML in SCCM 2012+.

How to use it

The script comes with (hopefully) proper documentation. So calling the default

Get-Help .\Import-TaskSequence.ps1 -detailed

should give you a start.

First, it requires the Path to the XML file. In addition, you need to supply the ID (Task Sequence PackageID) or the Name of the Task Sequence that you would like to replace with the content of the XML file. I personally wouldn’t use the Name parameter for this, as it might not be unique, but that’s just me.

YES, this will replace the specified Task Sequence, so be careful and don’t execute this if you don’t know what you are doing. I will not be liable for any damages. Luckily for you, if you use my above-mentioned script, it will automatically create a backup of the Task Sequence for you. In addition, this operation is configured as high impact, so by default you will be asked to confirm before it continues.

If you don’t want to replace an existing Task Sequence, supply the Create switch. If the Create switch is supplied, you have to specify the Name as well (you can’t use the ID) and can optionally supply a Description. The script will then create a new Task Sequence package based on the Task Sequence in the XML file. This comes in handy for copying task sequences as well. If you want to know the PackageID of the new Task Sequence package, use the PassThru switch; without it, the script won’t output anything except error messages if something fails.
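Putting the parameters described above together, two hypothetical calls might look like this (file names and Task Sequence names are made up for illustration):

```powershell
# Replace the existing Task Sequence with PackageID PS100123.
# The operation is configured as high impact, so it asks for confirmation.
.\Import-TaskSequence.ps1 -Path '.\PS100123_001.xml' -ID 'PS100123'

# Create a brand-new Task Sequence package from the same XML file and
# return the new package object (including its PackageID) via -PassThru.
.\Import-TaskSequence.ps1 -Path '.\PS100123_001.xml' -Create `
    -Name 'Windows 10 Deployment (Copy)' `
    -Description 'Created from exported XML' -PassThru
```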

By default, the script assumes it is executed on the SCCM Site Server or SCCM Provider Server using the credentials of the current user. If you need to adjust this, perhaps because you are running the script from a different computer or need to use different credentials (e.g. when migrating a task sequence between different environments), use the ProviderServer, SiteCode and Credential parameters.

Finally, as mentioned already, the script is quiet by default, as it’s primarily meant for automation. If you want feedback, you can set the PassThru parameter, which returns the updated Task Sequence package. In addition, you can set the ShowProgress switch to show the progress of the import process, which might be helpful over slow WAN connections. The script also honors the Verbose switch, which gives you detailed information while it’s executing.

There are actually two versions of this script. The one referenced above is a standalone version, created by the script from my recent post “Creating standalone PowerShell scripts – automatically merging module components into scripts”. The “real” script references a SCCM (ConfigMgr) module that I’ve published on GitHub as well. As I regularly make changes to this module, it might be better to use the standalone version, as it is regenerated every time a function it uses changes. But that’s up to you.

Organizing code in modules and re-using the same functions in different scripts is a common and recommended practice when working with PowerShell. I do have a lot of different modules created and grown over time that cover different aspects and that I use regularly in my daily work.

On the other hand, however, I also often need standalone scripts. By that I mean scripts that don’t reference any module, or at least only modules that are available by default. If the scripts are used for automation purposes outside of my local computer, I would otherwise need to make sure the module is available on all those machines as well. When using PowerShell within e.g. a System Center Orchestrator or SMA runbook, or as a step in an SCCM/ConfigMgr task sequence, it is often a benefit NOT to have a reference to one or even several custom modules. And if you need to share your script with someone, you would also need to share all the modules. Not to mention that most people just want to call the script; they don’t want to mess around with copying modules to their profile etc.

Because of this, I often ended up copying the referenced functions from the modules into the script before I could make it available. That isn’t good practice at all, as you then have to maintain several copies of the same function.

The Solution

As we are talking about PowerShell, we are also talking about automation. So there must be an automated way of solving this. And yes, there is.

I wrote a script that analyzes a given script for all function calls. These are compared against the functions from a supplied list of modules. Theoretically it could simply use the list of all imported modules, but as that could have negative side effects, I preferred to define explicitly which modules to use. The functions from those modules that are called by the script are copied into the script, typically at the beginning of either the script or its Begin block. For script-based modules this even includes “hidden” functions, meaning functions that aren’t explicitly exported. The script is then analyzed again, as the copied functions might themselves call additional functions, and so on. This runs recursively until all functions have been copied (or the maximum iteration level has been reached).
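The recursive resolution idea can be sketched in a few lines. This is a minimal illustration of the technique, not the actual published script: it uses the PowerShell AST to find command names and keeps resolving them against a set of known module functions until nothing new turns up.

```powershell
# Return the unique names of all commands called within a piece of code,
# using the PowerShell AST parser.
function Get-CalledFunction {
    param([string]$Code)
    $ast = [System.Management.Automation.Language.Parser]::ParseInput($Code, [ref]$null, [ref]$null)
    $ast.FindAll({ param($n) $n -is [System.Management.Automation.Language.CommandAst] }, $true) |
        ForEach-Object { $_.GetCommandName() } |
        Sort-Object -Unique
}

# Resolve all module functions a script depends on, including transitive
# dependencies. $ModuleFunctions maps function name -> function body.
function Resolve-Dependencies {
    param([string]$Code, [hashtable]$ModuleFunctions, [int]$MaxIterations = 10)
    $resolved = @{}
    for ($i = 0; $i -lt $MaxIterations; $i++) {
        $new = Get-CalledFunction $Code | Where-Object {
            $ModuleFunctions.ContainsKey($_) -and -not $resolved.ContainsKey($_) }
        if (-not $new) { break }   # nothing new found -> done
        foreach ($name in $new) {
            $resolved[$name] = $ModuleFunctions[$name]
            # Append the copied definition so the next pass also scans it
            # for further function calls.
            $Code += "`nfunction $name { $($ModuleFunctions[$name]) }"
        }
    }
    $resolved
}
```

For example, if the script calls Get-A and Get-A in turn calls Get-B, both definitions end up in the result after two passes.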

Below, I’ll first give you some more details on how to use it, and then some more on what it does under the hood.

How to use it

The script comes with proper (well, at least some) documentation. So calling the default

Get-Help .\Create-StandaloneScript.ps1 -detailed

should give you a start.

It has two mandatory parameters:

Path: the path to the script that shall be converted. You can also supply an array of scripts.

Module: a list of PowerShell modules. I preferred to explicitly specify the modules that get integrated into the script rather than integrating all imported modules. If there is a need for that functionality, please feel free to update the script and/or get back to me.

By default, it creates a copy of the script in a subfolder called “Standalone“. If you want to change the name of that folder, use the “Subfolder” parameter. There is no option to use a different name for the script, like adding a suffix or similar, as I preferred to keep them interchangeable. You know what to do if you feel the urge to get that changed.

The functions from the module(s) will be added either at the beginning of the script, right after the parameter definition (if there is any), or at the beginning of the script’s “Begin” block (see Advanced Functions). As they have to be parsed before they can be used, that was the best place to put them without knowing any details about the script itself. I personally prefer to use the “Begin” block in a script to define my functions and then use the “Process” block for the, well, processing. Using the “Block” parameter, you can specify a different block like “Process” or “End“, but that’s mainly for the sake of completeness.

Finally, the “MaxIterations” parameter defines the maximum number of recursive passes. The default is 10, and I personally haven’t had a script that needed more than 5 iterations yet.

By default, the script won’t show any messages or progress. It will raise errors if something fails, but as it’s meant for automation, there is no need for text output (stop using Write-Host!). If you want to see a result, use the “PassThru” switch and it will return the path to the new script. Or use the “Verbose” switch to enable verbose logging.
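Based on the parameters described above, a typical invocation might look like the following (the script and module names are taken from this post; the paths are made up for illustration):

```powershell
# Merge the functions used from the ConfigMgr module into the script and
# write the standalone copy to .\Standalone\Import-TaskSequence.ps1.
# -PassThru returns the path to the generated script.
.\Create-StandaloneScript.ps1 -Path '.\Import-TaskSequence.ps1' `
    -Module 'ConfigMgr' -PassThru
```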

Getting the definition of basically any function is pretty easy. Simply use the Get-Command cmdlet. The object it returns has a property called “Definition“, which contains the function definition. This is primarily useful for script-based modules, but works for some built-in modules as well. E.g. Get-IseSnippet from the “ISE” module is one of the shortest I could find for demonstration purposes:
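Since the ISE module only exists in Windows PowerShell, a locally defined function demonstrates the same mechanism:

```powershell
# Define a trivial function...
function Get-Greeting { "Hello, $args" }

# ...and read its source code back from the Definition property.
# For a module function, the same works after importing the module,
# e.g. (Get-Command Get-IseSnippet).Definition in Windows PowerShell.
(Get-Command Get-Greeting).Definition
# -> the function body, i.e.  "Hello, $args"
```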

As mentioned, this won’t work for most built-in modules. Also, Get-Command will not return anything for “hidden” functions. These are internal functions of a module that are not exported and are not supposed to be called from outside the module. However, as the functions we copy might call those internal functions, we simply have to copy them as well. To get access to those internal functions, I’m using a small hack and call Get-Command inside the module context:

The trick is to use the PassThru switch on the Import-Module cmdlet, which returns an object representing the imported module. We then prepare a Get-Command statement as a ScriptBlock and execute it “inside” the module object.
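This sketch shows the idea with an in-memory module (created via New-Module as a stand-in for `Import-Module <RealModule> -PassThru`), so it runs without any module files on disk:

```powershell
# Create and import a demo module with one hidden and one exported function.
$module = New-Module -Name DemoModule {
    function InternalHelper { 'internal' }      # NOT exported ("hidden")
    function Get-Something { InternalHelper }
    Export-ModuleMember -Function Get-Something
} | Import-Module -PassThru

# Prepare the Get-Command call as a ScriptBlock...
$lookup = { Get-Command InternalHelper }

# ...and execute it inside the module's scope via the call operator,
# where the hidden function is visible and its Definition can be read.
$hidden = & $module $lookup
$hidden.Definition
# (outside the module scope, Get-Command InternalHelper finds nothing)
```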

If you like the script or have any feedback, please feel free to comment or get back to me.

Invoke a remote command without WinRM, psexec or similar – Access administrative shares even if they have been removed
(Sat, 12 Mar 2016 – https://maikkoster.com/invoke-a-remote-command-without-winrm-psexec-or-similar-access-administrative-shares-even-if-they-have-been-removed/)

Recently I ran into a situation where I had to check a few log files on some remote computers and also needed to execute some commands to fix an issue. However, for reasons I’m not going to enlarge on, all administrative shares had been removed. So no share was left that would allow me access to the local file system. In addition, the PowerShell cmdlet Invoke-Command couldn’t help me out either, as either PowerShell wasn’t installed (yes, oooooold systems) or WinRM wasn’t enabled/configured.

A typical fix when WinRM isn’t enabled or properly configured is to execute the “winrm quickconfig” command via e.g. psexec. But due to the removal of the Admin$ share, the typical weapons of choice for remote execution, like psexec and similar tools, wouldn’t work either, as they initiate their connection via the Admin$ share.

So what’s left? I could still use RDP or a similar tool. But most of those machines were workstations, which would require me to get back to the local user, ask for a timeframe to either log on or take over their session, etc. This would be a hassle and time-consuming for both of us. Not to mention that it doesn’t scale properly. So I took on the challenge and looked for a “better” solution.

One option I found was making use of the Win32_Process WMI class, in particular its Create method, which allows you to, guess what, create a new process. That would cover the second part of my problem: executing a command on the remote computer. But what about the log files? Well, how about creating a new share to check the log files, doing our troubleshooting, and removing the share afterwards?

All it takes is a PowerShell command to invoke a WMI method remotely. We can use either Invoke-WmiMethod or Invoke-CimMethod. In this case, Invoke-WmiMethod is probably a bit shorter:
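The following is a sketch of this approach, assuming a remote computer named 'PC01' and administrative rights on it (the computer name, share name and paths are placeholders, not from the original post):

```powershell
# Start a process on the remote machine via Win32_Process.Create,
# e.g. to enable WinRM for proper remoting afterwards.
Invoke-WmiMethod -ComputerName 'PC01' -Class Win32_Process -Name Create `
    -ArgumentList 'winrm quickconfig -quiet'

# Publish a folder as a temporary share via Win32_Share.Create. Note that
# Invoke-WmiMethod expects -ArgumentList in *alphabetical* order of the
# method's parameter names (Access, Description, MaximumAllowed, Name,
# Password, Path, Type); Type 0 means "Disk Drive".
Invoke-WmiMethod -ComputerName 'PC01' -Class Win32_Share -Name Create `
    -ArgumentList @($null, 'Temporary log access', $null, 'TempLogs$', $null, 'C:\Windows\Logs', 0)

# When done troubleshooting, remove the temporary share again.
(Get-WmiObject -ComputerName 'PC01' -Class Win32_Share -Filter "Name='TempLogs$'").Delete()
```

A ReturnValue of 0 from these method calls indicates success.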