Note: When updating you will need to update any existing plugins as well.

Despite a seemingly quiet few months I have continued to enhance my script for maintaining software updates in WSUS and ConfigMgr. In fact, I’ve silently released new versions of the script from time to time by simply updating the binary hosted here. However, now there’s enough new stuff worth talking about.

Before we get into it I’d like to thank Chad Simmons for his contributions. Chad submitted several fixes and plugins that I suspect many will enjoy. Also, he fixed my atrocious spelling and has basically shamed me into using Visual Studio Code for any future development.

Using WhatIf will force the script to run regardless of the 24-hour timeout.

By default the script will only run once every 24 hours. This is an arbitrary amount of time and eventually I’ll probably make it configurable. However, this timeout serves two purposes. The first purpose is to avoid an infinite loop when you trigger the script from a status filter rule while using the ReSyncUpdates feature. Second, if you’re syncing multiple times a day for Defender updates it would seem overkill to run maintenance every time. When first implementing the script you should be running it a whole bunch of times using WhatIf. Therefore, it makes sense to ignore the timeout when changes are not being made.
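To illustrate, a first dry run might look like this (WhatIf is the only parameter confirmed above; DeclineSuperseded is taken from an example elsewhere on this page, and your exact switches may differ):

```powershell
# Dry run: report what would be declined without making any changes.
# WhatIf also bypasses the 24-hour timeout and the sync-status check,
# so it can be re-run back to back while tuning the configuration.
.\Invoke-DGASoftwareUpdateMaintenance.ps1 -DeclineSuperseded -WhatIf
```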

WhatIf mode doesn’t check sync status when declining updates

Similar to the above, when running in WhatIf mode it doesn’t matter whether your environment is syncing or not. Since it takes a non-trivial amount of time to check the sync status there’s no point doing so if no changes are going to be made anyway.

Added ExcludeByTitle, ExcludeByProduct, and IncludeByProduct options

This was a request I heard a few times so I’ve added it in. You can use these parameters to either exclude updates from being declined or to limit the scope to a certain product family. Honestly, I don’t think this feature has all that much value. To my thinking it would mean running the script multiple times to decline updates based on multiple different characteristics at the same time. That entire use case is what plugins are for. Speaking of plugins …
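As a sketch of how these parameters might be combined (the exact value formats here are my assumption, not confirmed syntax):

```powershell
# Limit declines to the Windows 10 product family, but leave anything
# with 'Preview' in the title alone.
.\Invoke-DGASoftwareUpdateMaintenance.ps1 -DeclineSuperseded `
    -IncludeByProduct @('Windows 10') `
    -ExcludeByTitle @('Preview')
```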

Updated Windows 10 Plugins

The ever-sharp Justin Chalfant keyed me into the fact that although they don’t show up in ConfigMgr, the Windows 7 and 8.1 in-place upgrades to Windows 10 are synced in WSUS. So I’ve updated the Win 10 plugins to handle those updates and created a plugin to remove them entirely. I’ve also updated the plugins to handle the name change from Home/Education/Pro/Enterprise/etc. to just Consumer/Business.

MOAR PLUGINS!!!!

I was super excited to see that the aforementioned Chad Simmons grokked how powerful the plugin concept could be and was willing to share his work. There’s zero chance that the main script will support every use case administrators might come up with, and I have zero interest in even attempting that goal. That’s exactly what the plugins are designed to handle. I really don’t care what convoluted logic you ‘need’ … just put it in a plugin and return a list of update IDs for the main script to decline. If you think others might benefit then by all means reach out and I’ll consider adding it to the release. Thanks to Chad we now have plugins with more advanced logic for declining Itanium and 32-bit updates. I took the latter plugin and released one that excludes Server 2008 (yeah … I know). I likewise created a Windows 10 version that excludes LTSC if you happen to be using that channel.
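For a rough idea of what a plugin looks like, here’s a minimal sketch. It assumes the plugin receives the update collection in an $Updates variable and hands a list of update IDs back to the main script; the file name, property names, and filter logic are all illustrative:

```powershell
# Decline-ExamplePlugin.ps1 (illustrative sketch, not a shipped plugin).
# Assumes the main script provides $Updates and consumes the returned IDs.
Param($Updates)

$UpdatesToDecline = @()
ForEach ($Update in $Updates) {
    # Whatever convoluted logic you 'need' goes here.
    If ($Update.Title -like '*Itanium*') {
        $UpdatesToDecline += $Update.Id.UpdateId
    }
}
Return $UpdatesToDecline
```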

For Card-Carrying Members of the ‘Command Line is Too Damn Long’ Party

So … ok … things have gotten a little out of hand in terms of how many features and parameters this script supports. Add in trying to pass arrays and hash tables as parameters and things get awkward real quick. So I’ve created a new feature that will read the parameters out of a configuration INI file. Further, if you provide the script with no parameters at all it will default to using the config.ini file in the same folder as the script. An example default config has been provided that represents what I run in my own organization. Modify that example as you see fit and remove the WhatIf parameter from it when you are satisfied with the results. Note that relative paths are now supported for the configuration file itself as well as the log and output files.
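A config file along these lines is the idea (the section name and value syntax below are illustrative; see the shipped example for the real format):

```ini
; config.ini -- one parameter per line, mirroring the command line.
; Remove WhatIf once you're satisfied with the results.
[Settings]
DeclineSuperseded=$True
ExclusionPeriod=3
UpdateListOutputFile=.\UpdateListOutputFile.csv
WhatIf=$True
```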

Let’s Make WSUS Less … Terrible … Again

So this might be news to you but WSUS is … how can we say this … kind of long in the tooth at this point. As in, take it to the vet and do the right thing. Beyond just declining updates there are two other things you can do.

The first thing is to make the database faster by adding indexes. A while ago Steve Thompson and Benjamin Reynolds went looking at the stored procedure for deleting obsolete updates to figure out why it took so long. The result was a great blog post you can read here: Enhancing WSUS database cleanup performance SQL script. TL;DR: The product team failed to index certain fields they rely heavily on. Fix that problem and suddenly things run hundreds of times faster.

The latest version of the script supports two new parameters: UseCustomIndexes and RemoveCustomIndexes. The first will create the indexes Steve and Ben talk about as well as some others that the community has found to be helpful. This, in theory, should solve the last mile problem for the WSUS Cleanup Wizard and make WSUS run faster in general. Keep in mind that you still need to do DB maintenance on WSUS’s database just like you would any other. I’m told the WSUS product group plans to add the indexes Steve and Ben found in the next release. Until then though, adding custom indexes isn’t exactly supported by Microsoft. For those fearful of living on the edge we call ‘unsupported’ I’ve added the RemoveCustomIndexes parameter to remove them if you so desire. Premier support will never be the wiser.
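For reference, the kind of indexes in question look roughly like this (the table and column names follow Steve and Ben’s post as I recall them; the script’s actual definitions may differ):

```sql
-- Illustrative: non-clustered indexes on the fields the cleanup
-- stored procedures lean on heavily.
CREATE NONCLUSTERED INDEX IX_tbRevisionSupersedesUpdate
    ON dbo.tbRevisionSupersedesUpdate (SupersededUpdateID);
CREATE NONCLUSTERED INDEX IX_tbLocalizedPropertyForRevision
    ON dbo.tbLocalizedPropertyForRevision (LocalizedPropertyID);
```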

Less WSUS is Always More WSUS

The second thing you can do is minimize the number of updates in the database as a whole. Declining updates removes them from the catalog that WSUS generates and that clients process, but it doesn’t actually remove them from the database. I recently ran the script for a client whose WSUS instance goes back nearly a decade. They had something like 40k active updates in the database. While I reduced that number dramatically the updates were still in the database and it was performing poorly. To remedy this I’ve added a parameter called DeleteDeclined which will actually delete updates from the database using the WSUS API. When used, the script will create a local data file in the script’s folder that tracks when each update was declined. It will then delete any declined update based on the ExclusionPeriod value. So if you decline updates after 3 months of being superseded they will be removed from the database 3 months after that, for a total of 6 months after being superseded. Note that once deleted, it’s not easy to bring an update back. You may be able to manually import it from the Update Catalog, or you may need to de-select the corresponding product, sync, select it again, and resync the entire product.
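Putting the timeline together with a hypothetical invocation (this assumes ExclusionPeriod is expressed in months, matching the 3-month example above):

```powershell
# Superseded in January -> declined in April (ExclusionPeriod = 3)
# -> deleted from the database in July, 6 months after supersedence.
.\Invoke-DGASoftwareUpdateMaintenance.ps1 -DeclineSuperseded `
    -ExclusionPeriod 3 -DeleteDeclined
```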

What’s Next?

The next thing I want to dive into is how to orchestrate this kind of maintenance when you have multiple SUP databases and in CAS scenarios. In my research the recommendation seems to be to decline updates top-down but run the WSUS Cleanup Wizard bottom-up. I want to dive into that a bit and see how the script might be able to handle such scenarios and what it takes to configure. It seems like a dark hole of sadness so if you don’t hear from me for a while … pour one out for your fellow comrade. Until then, keep your stick on the ice.

84 Comments

New to PowerShell, been thrown in the deep end at school. Did a fresh install of Server 2016, set up AD, GP, etc.
Now I want to get updates pushed to client PCs.
Ran your script with everything set; how do I know it’s working?
It ran with no errors.

By default the script will create a log file in the same directory of the script. You’ll find the results there. If it’s your first time running it I always love to see the first summary section that shows the initial reduction in active updates.

That just means the script is failing to convert the string ‘$False’ for the StandAloneWSUSSSL parameter. You might try removing the $ and see if that works otherwise I’ll have to test that a bit and get to the bottom of it. I’m also pretty sure that if you just comment that out the script will default to it not being SSL.

Is this WSUS instance part of a SUP or a standalone WSUS server? You say you’re running ConfigMgr 1810 but you’ve configured the script to run on a standalone.

If it’s part of a SUP then don’t specify the StandAloneWSUS parameters. If it’s truly standalone then my guess is that you need to play around with the StandAloneWSUSSSL value somehow. I don’t have a standalone WSUS let alone one running SSL so I haven’t really been able to test that part effectively.

Wow – great script. Just wanted to check on downstream servers. We have this scenario (don’t ask!):
ServerC (our SUP) is downstream from ServerB (where currently most get updates from) which is downstream from ServerA.
We control ServerC and ServerB but have no access to ServerA.
Do we run this stuff on ServerC first then ServerB?

If I recall correctly, the way downstreaming works is if it’s declined at a higher level, lower servers will have automatically inherited that status, so you (should) only need to do it against ServerB.

Yea, there’s a whole post right in that one question. Microsoft’s recommendation is to work bottom-up because in the event that someone _does_ manually initiate a sync during your maintenance run it can break stuff. I’d argue that if you’re reasonably sure no one is going to manually kick off a sync then doing it all in parallel limits the amount of time you’re in that ‘vulnerable’ state where a sync would cause issues.

It’s really all about what exactly is replicated. If you decline an update on ServerC but not on ServerB, does the ‘not declined’ status get replicated on the next sync and override the decline on ServerC? If so then I’m in big trouble as I have no control over ServerA and they don’t decline stuff. I’ve just manually declined an update on ServerB and will wait for all the auto syncs and see what happens.

I declined an update on ServerB last week. After several syncs the same update on ServerC has not been declined. So the conclusion is that the ‘Declined’ status is not replicated. Which for me means that the WSUS maintenance scripts will have to be run at both ServerC and ServerB.

Hi Bryan, I am trying to use the plugin “Decline-WindowsX86” to decline all Windows 10 x86 updates but it doesn’t seem to be working. Below is the line I edited in the script to do so. Thanks in advance!
$SupportedWinX86Versions = @('Windows 7')

Have to ask, have you run the FirstRun and UseCustomIndexes options? What that error means is that the script simply failed to get the update metadata from WSUS because it timed out trying. It’s a bit of a catch-22 … WSUS is running too poorly to run maintenance.

I just tried with the latest version from November, and I’m still getting the issue where it tries to connect to the database and fails since it’s a WID. I tried with only the following:
.\Invoke-DGASoftwareUpdateMaintenance.ps1 -UpdateListOutputFile 'E:\sources\Invoke-DGASoftwareUpdateMaintenance\UpdateListOutputFile.csv' -DeclineSuperseded

and it’s failing at the start (in the log file)

I tried to find where in the PowerShell script it tries to connect but I’m a bit lost in the code.

At line 1188 you have a return. Maybe that was inside a function before, but now it’s straight in the root, so the return actually exits the run instead of continuing. You must also remove the Set-Location because it causes another error.

So I doubt your edits actually result in the script functioning. I don’t have a WID database to test with directly, and it’s a bit tricky to connect to them because there are different versions of WID with different connection strings. So by commenting out those lines you’re simply bypassing the check that makes sure the script successfully connected to the database. Unless it connects, the FirstRun and CustomIndex features aren’t going to function.

Weird, I did post the log file but can’t see it here; I think it didn’t like the tag. Since I’m using WID, it will fail to connect to the database (it’s on a different server than the site server). I just ran the script and now it’s going through. What happened before is that after doing this check, it exited the PowerShell script completely instead of just using it as a flag. The log file was simple, only saying this: https://pastebin.com/FJSmDsLQ

Pretty sure I know what’s happening here. Undo the changes you made and comment out line 708: $WSUSServerDB.ConnectToDatabase().
When I get the WSUS database info the routine tries to connect to validate that the information is correct. You can’t remote into a WID database, that’s one of its many limitations, which is why you’re seeing the error. There’s really no reason to do that check other than it’s part of the API and super easy to do.

That does fix it, because now it doesn’t run the try-catch in that function, which resulted in a null return.

I’m unsure what exactly you want to achieve at lines 1183 and 1184. If you want to see whether you have any information in $WSUSServerDB, regardless of whether you can connect or not, then the problem is in the try-catch at line 715, where you return when you fail to connect; but failing to connect doesn’t mean the information is wrong (thus you get a blank $WSUSServerDB returned). You could keep that as a test connection when needed, but since the function is Get-WSUSDB it should always return the value of $WSUSServerDB, not just a plain return.

The FirstRun and CustomIndexes features connect directly to the DB and thus need the DB info. The routine to get it is probably better nested inside the features that actually require it, which is what I’ll do. Commenting out line 708 just confirms what was going on. All part of developing against WID by proxy.

Hope all is well. Wondering if there is a way to remove specific updates outside of the expiration period the environment is set to. For example, A/V definition files; there is no need to keep these around for extended periods. So wondering if there is a way to have them removed immediately after they expire?

We never got to the point of using WSUS/ConfigMgr to manage definition updates so I’m a bit hazy on some of that stuff. However, when you say ‘removed’ where do you mean? When they show as Expired in ConfigMgr what is their status in the WSUS console? It’s been my understanding that MS expires the definitions pretty aggressively and in doing so solves the whole client catalog problem. In ConfigMgr there’s an unmanageable maintenance task that removes expired updates every 7 days.

I thought I meant from both ConfigMgr and WSUS, but you are right that the unmanageable maintenance task removes the content every 7 days from the software update group. The deployment package does not get updated, though, so expired content remains and causes the package to balloon. Running your script with CleanSources unfortunately does not touch these ADRs, so the packages just continue to balloon in size even though the expired updates have been removed from the update group.

So there’s another unmanageable maintenance task that runs every 7 days and will remove an update’s content from deployment packages if the update is no longer deployed. Though I could see there being a timing issue there I hadn’t thought of. If the first task removes the update entirely from ConfigMgr then the second won’t know that it’s not deployed (in theory) and might not touch it. You might want to dig into that a bit, prove that you have content from expired/removed updates, and file a UserVoice item. If that were the case, it might be possible to solve in the script but it’d be a hard thing for me to replicate and repeatably test at this point.

The CleanSources feature only deals with the source folder of the deployment package. It doesn’t happen often but sometimes the source files are left behind when something is removed from a deployment package. So the script compares the source folder to the deployment package and removes anything not in the package.

I guess, yes? There’s nothing that stops you from running the Cleanup Wizard yourself outside of the script. However, there’s a parameter to run it after the script does all of the other WSUS stuff so I don’t know why you would want or need to.

I’m somewhat confused about the error message “Currently, this script must be ran on a primary site server. When the CM 1706 reaches critical mass this requirement might be removed.”
I’m somewhat new to domains, but as far as I knew the server was its own primary site server? It’s the only Windows Server so I’m not sure what it’s looking for.

If you think through all of the things the script does it needs pretty much god-level access. So there’s a check in the script to make sure it can read the WMI classes that exist on a primary site server. There was something released in 1706 that would theoretically make it easier to run elsewhere but even then I’m not quite sure where the value is in running it elsewhere. So what that warning means is that you’re not running it on your primary site server and you need to.

Right, that’s my question: if the only server in the domain is not a “primary site server”, then what do I do to make it so? I ran the script as a Domain Enterprise Admin on the local console (i.e. Session 0) and it spit out that error message. If that’s not God-level I’ll try impersonating SYSTEM, but I doubt that would help…

If you just have WSUS and don’t have ConfigMgr, try this (I only have WSUS)

The bug I found was in the config file parsing section. \d will find an integer in hostnames that have numbers in them, which I imagine a whole lot of folks do. This causes the config parse to fail for any string with a number in it that’s in the config file and you won’t get the config to realize you only have a single WSUS server.
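A minimal reproduction of the mismatch, independent of the script itself (illustrative only, not the script’s actual pattern):

```powershell
# An unanchored \d matches anywhere in the value, so a hostname with a
# digit in it looks 'numeric' to the parser.
'WSUS01' -match '\d'     # matches: the value is misclassified
'WSUS01' -match '^\d+$'  # anchored pattern: no match, hostname stays a string
```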

Then try to change that line in the script and see if it works. If you want to use the config.ini, you also need to not pass any other command-line options to the script or it will ignore it, from what I remember.

Yep, remove that check and it’s going to assume you’re running ConfigMgr and try to call into it to get all sorts of info. You need the WSUSStandAlone feature to work. In theory, if you simply specify everything on the command line instead of the config file it should work as-is. But I’ll try and get the fix out soon.

Define slow. The log is pretty verbose; where’s it sucking up time? If you have a lot of declined updates … sure … it’ll take more time to run the script.

The source field is really just my attempt to indicate why something is deleted or declined and goes into the output file. In theory there might be some other reason to delete an update other than because it was declined.

So the problem here is that the language configuration for ConfigMgr/WSUS doesn’t quite match the language field on the actual updates. The script looks at the config and does the best it can while erring on the side of safety. If you want to get really specific it would be pretty simple to write a plugin that looks for specific language codes that you define. Just be sure to also keep updates labelled as ‘all’ since those are the majority.

Bryan, first thank you for the amazing job you’re doing with this script, which I found just recently.
However, I can confirm that neither of the plugins is working as it should – they always return zero updates to decline to the main script. I tried to look into it and here are my thoughts:

In each of the plugins you’re referring to a variable $Updates – I searched through the whole source of the main script and the only place where that variable is defined is inside the IF that sets the maximum runtime for updates (line 1900). I suppose that’s not enough, is it? Could this be the reason the plugins don’t work?

You are correct, I had some time to look at this yesterday and I once again screwed up a Git upload/merge. I apparently suck at those and plan to migrate to some real tooling before starting any new serious development. I should have another release out today with the correct plugins.


Oh, it’s on GitHub in my ‘super secret’ public repository. I haven’t really advertised though for reasons that probably aren’t good reasons. Mostly that I don’t want people running versions that are WIP in their prod environments.

Bryan, is there anything specific you have to do to remove the in-place upgrade to Windows 10? I copied the plugin from the disabled folder to the plugins root and executed it but it does not detect any IPU titles even though they appear in my WSUS.

I’m assuming you’re talking about the new Decline-Windows7IPUs plugin? That one should work without any need for modification. In your WSUS console do you see ‘Windows 7 and 8.1 upgrade to Windows 10’ updates?

Yes they do. When I ran the script it came back with 0 items but the count is about 50. I also have the Office 365 plugin running and set to leave only the current release, but all the releases are still listed in the console.

Yet my console still shows all the Deferred Channel and First Release channel releases and multiple versions. I extracted some of the code just to get $Updates and then executed both Decline-Office365Editions and Decline-Windows7IPUs. Both $Office365Updates and $WindowsIPUUpdates returned values. It seems to me the information collected is not being passed back to the main script when it executes the plugin.