Invoke-DGASoftwareUpdateMaintenance


Multi-Site/CAS Support

I’ve been slowly but surely revising the maintenance script I wrote and released last year, occasionally updating it as things are pointed out. I hope that by now multi-site/CAS environments are supported. If you run such an environment, please reach out and let me know if you have any issues. Unlike users with a single site, you may need to specify the site code when running the script.

Stand-Alone WSUS Support

I had a couple of questions or requests regarding stand-alone WSUS support. While I really do think AdamJ’s maintenance script is better than mine in this regard, there are a couple of benefits to running mine in tandem: mostly the logging (logging is godliness, people) and the plugins. So the script now supports three additional parameters: StandAloneWSUS, StandAloneWSUSPort, and StandAloneWSUSSSL. There’s logic in there to make sure the rest of the parameters make sense and that you don’t try to do anything ConfigMgr-related when StandAloneWSUS is specified. See the full documentation here for more info.
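For example, pointing the script at a stand-alone WSUS server might look something like this (the server name is a placeholder and I’m assuming SSL on the default HTTPS port; check the documentation for the exact parameter types):

```powershell
# Hypothetical invocation against a stand-alone WSUS instance.
# 'wsus01.contoso.com' is a placeholder; 8530/8531 are the default
# WSUS HTTP/HTTPS ports.
.\Invoke-DGASoftwareUpdateMaintenance.ps1 -StandAloneWSUS 'wsus01.contoso.com' `
    -StandAloneWSUSPort 8531 `
    -StandAloneWSUSSSL
```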

I’ll warn you that I don’t plan on testing this use case extensively so if you encounter issues let me know and I’ll do my best to address them.

Forgive Me Father For I Have Sinned Support

The other problem users reported early on is that the script would time out. Running this kind of maintenance script has a catch-22 built into it: in order to maintain the update catalog you need to get the catalog from your WSUS server, and that tends to time out in an environment that has never seen maintenance. Which, of course, is the whole problem you’re trying to fix in the first place. As if by magic, such environments have seemingly come out of the woodwork this last week. In such environments, manually running the WSUS Cleanup Wizard from within the WSUS Console tends to time out and crash at some point … maybe hours or days into the process. To get the script to run successfully, there are a couple of things you can try. First, restart IIS and run the script ASAP when it comes back up. If that doesn’t work, you can also try blocking network traffic to the server so that WSUS isn’t trying to respond to clients while you maintain it. If that doesn’t work …
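A rough sketch of those two workarounds, assuming the default WSUS ports and a firewall rule name I made up for illustration:

```powershell
# Workaround 1: restart IIS, then kick off the script immediately.
iisreset /restart
.\Invoke-DGASoftwareUpdateMaintenance.ps1   # plus your usual parameters

# Workaround 2: temporarily block inbound client traffic on the default
# WSUS ports (8530/8531) while the maintenance runs. The rule name is
# arbitrary. Run the script locally on the WSUS server itself so loopback
# traffic isn't affected by the rule.
New-NetFirewallRule -DisplayName 'Block WSUS Clients' -Direction Inbound `
    -Protocol TCP -LocalPort 8530, 8531 -Action Block
.\Invoke-DGASoftwareUpdateMaintenance.ps1
Remove-NetFirewallRule -DisplayName 'Block WSUS Clients'
```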

I’ve added a FirstRun parameter that directly calls the stored procedures laid out in Microsoft’s complete guide to yadda yadda blog post. When used, it will connect directly to the WSUS database (SUSDB) and get a list of obsolete updates using the spGetObsoleteUpdatesToCleanup stored procedure. It then loops through and deletes each one using the spDeleteUpdate stored procedure. This mimics what the WSUS Cleanup Wizard does, but with a 30-minute timeout instead of the default 30-second one. Be aware that this may take hours, days, weeks, or even longer to complete. However, it most likely will complete, unlike the wizard. Afterwards, you should be able to run the script normally.
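Simplified, the FirstRun logic looks roughly like this (a sketch, not the script’s actual code; the connection string shown is for the Windows Internal Database, so adjust it if your SUSDB lives on a full SQL instance, and the real script adds logging and error handling):

```powershell
# Connect directly to SUSDB via the Windows Internal Database named pipe.
$conn = New-Object System.Data.SqlClient.SqlConnection
$conn.ConnectionString = 'Server=\\.\pipe\MICROSOFT##WID\tsql\query;Database=SUSDB;Integrated Security=True'
$conn.Open()

# Get the list of obsolete updates the cleanup wizard would delete.
$cmd = $conn.CreateCommand()
$cmd.CommandText = 'EXEC spGetObsoleteUpdatesToCleanup'
$reader = $cmd.ExecuteReader()
$obsolete = @()
while ($reader.Read()) { $obsolete += $reader.GetInt32(0) }
$reader.Close()

# Delete them one at a time with a generous 30-minute timeout.
foreach ($id in $obsolete) {
    $del = $conn.CreateCommand()
    $del.CommandText = "EXEC spDeleteUpdate @localUpdateID = $id"
    $del.CommandTimeout = 1800   # 30 minutes instead of the 30-second default
    $del.ExecuteNonQuery() | Out-Null
}
$conn.Close()
```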

The FirstRun parameter should be considered experimental at this point. I know it works but none of my environments have any obsolete updates to really test it against. You know … because I’ve been maintaining it like a boss.

You might try a UNC path, yes. Is J: a mapped drive? Because if this is running as system it won’t have your mapped drives or maybe access to various network resources. Really, I’ve only tested saving the log relative to the script itself. You can leave that commented out and it will save it alongside the script.

It’s a local drive, not mapped & the lastran file is being created so I’m not thinking it’s a permissions thing. The script is being run as a scheduled task by SYSTEM. Commenting out the line has made it work though, thanks!

I’m using your script now alongside the above set of changes. It changed the rate from 34 or less/hour to thousands an hour. Thank you for your script. I’m really hoping it helps my cluttered disaster WSUS.

Not sure what I’m doing wrong here. I’ve got an 1806 install, running the script on the primary site server, WSUS is installed on a different site server. I had previously set up the status filter rule incorrectly, so the script wasn’t running. I fixed the problem, and it started running. However, I noticed I got a time-out error on the WSUS cleanup wizard. So I decided to run it manually with the following command:

I was mostly concerned about running the cleanup wizard and getting it to finish successfully. I could let it get back to running normally with the status filter afterwards. But even running it like this, I can’t get it to not time out. It times out in under 5 minutes. Am I just missing something? I can send logs or config files if necessary.

Hi Bryan, I am trying to use the plugin “Decline-WindowsX86” to decline all Windows 10 x86 updates but it doesn’t seem to be working. Below is the line I edited in the script to do so. Thanks in advance!
$SupportedWinX86Versions = @('Windows 7')

Hi, love the script. Been working the plugins into my schedule today and I’m kind of confused about how $SupportedUpdateLanguages in Decline-Windows10Editions.ps1 deals with the locales. I’m in the UK and want to expire the en-US updates, keeping just the en-GB ones. The default entry is "en","all", but when I change it to "en-GB","all", no more updates are expired. Am I misunderstanding? Many thanks.

The problem boils down to a mismatch between the language codes used by ConfigMgr, WSUS, and the updates themselves that I’ve never quite understood, so I played it safe. I forget the nitty-gritty of it, but you’d have to look at the actual language codes on the updates themselves.

When used with ConfigMgr, it pulls the array of supported languages from ConfigMgr itself. The problem is that they don’t directly match the language codes on the updates themselves. So the script gets an array of language codes and then determines if any of them match one of the language codes on the update. If no match is found, the update is declined. So if you want to do it by hand, you need to create an array of the language codes that you want to keep, and those language codes need to match whatever codes the update metadata is using.
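The decline-by-language logic boils down to something like this (the function and variable names are illustrative, not the script’s actual ones):

```powershell
# Sketch of the matching logic: keep the update if ANY of its language
# codes matches a supported one; decline it only when no match is found.
function Test-ShouldDecline {
    param(
        [string[]]$SupportedLanguages,   # from ConfigMgr or your hand-built array
        [string[]]$UpdateLanguages       # whatever codes the update metadata carries
    )
    foreach ($lang in $UpdateLanguages) {
        if ($SupportedLanguages -contains $lang) { return $false }
    }
    return $true
}

# The match is exact, so if an update's metadata says just 'en' and your
# array only contains 'en-GB', nothing matches and the update is declined.
Test-ShouldDecline -SupportedLanguages @('en-GB','all') -UpdateLanguages @('en')   # → $true (decline)
Test-ShouldDecline -SupportedLanguages @('en','all')    -UpdateLanguages @('en')   # → $false (keep)
```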

I’m running stand-alone WSUS on Server 2012 and I might need your advice. Last month I managed to completely migrate our whole environment to Windows 10 1803/1809. WSUS had been maintaining our Windows 7 workstations for years.
I really don’t need to keep any record of Windows 7. I’ve already removed it as a product class from synchronization.

Question is, how to do proper maintenance:
I ran your script. On the first run it managed to remove over 70 GB from the WsusContent directory. This is great, but I expected much more space would be available.
Log file says:

I’m not quite sure what your goal is. I’m assuming you were hoping to reclaim more drive space, which, for me, is rarely something I worry about unless we’re talking hundreds of GBs … disk is cheap. To me, given the cumulative updates for Win 7 … 70 GB doesn’t sound too far off. I have no real basis for thinking you ‘should’ get more, but if you want to try, you would clear out the content folder, do a ‘wsusutil /reset’, and redownload everything. Or sure … just rebuild WSUS. Neither of which I personally think is worth it … but you do you.

That’s the thing … I’m not sure if the Windows 7 updates have been removed at all.
I’m not having that much of a problem with free space. But a 7-year-old database with 67K unneeded updates sounds a bit much to me. Also, after clearing that 70 GB I do feel performance improvements.

Firstly, thanks for all your hard work. Loving this script after the couple of days I’ve been using it.

Just after a bit of help. I’m not a scripting guy tbh, I can work my way around a script but my debugging skills are minimal.

I am having an issue with the plugins. I was using the Win10 language, version, and edition plugins and hadn’t configured them right, which caused all Win10 upgrades to be declined when I ran the script. I’ve corrected the errors (so I believe), went into WSUS and set the updates I want back to “unapproved”, and ran the script again, but they keep getting declined. I’m unsure if this is because they were originally declined and there’s something I have to do to “default” them, or because there are still issues in how I’ve changed the plugins.

The only changes I’ve made in the plugins are as follows

Editions Plugin

#Un-comment and add elements to this array for editions you support. Be sure to add a comma at the end in order to avoid confusion between editions.
$SupportedEditions = @("Feature update to Windows 10 Enterprise,","Feature update to Windows 10 (business editions)*,")

I am trying to run this on a standalone WSUS (SBS 2011). When I run the script I get the error “The Update Services module was not found”. I have looked for a way to install the update services module so I can get the powershell commands needed for your script, but I can’t figure it out. Do you have any info on this? Thanks for your time.

Thanks for this script, I’ve been learning a lot just getting things implemented and reading background info on WSUS and why this is so helpful.

I just gave this a shot and everything seems to run correctly (logs don’t show any errors). However, I feel like I should be reclaiming a LOT more disk space. Our environment hasn’t been well maintained, ever. After doing a FirstRun (which I did not do using PSEXEC; theoretically I have all the permissions needed, or so I thought) and a normal run (this time using PSEXEC to see if it made a difference), the current size of the WSUS source folder is 438 GB, down from the initial size of 477 GB. I’m happy to get the ~40 GB back, but I would think it should be a LOT more, considering the log shows “Total Newly Deleted Updates: 16742”.

I’m having a hard time finding examples of how much drive space people have reclaimed from the FirstRun on an out-of-control WSUS source folder. Any thoughts or advice appreciated.

I’m assuming that you’re using a stand-alone WSUS instance? When used by ConfigMgr you shouldn’t be approving updates, and therefore it shouldn’t be downloading stuff into the WSUS content folder at all.

Further clarification: you’re saying the log showed 16742 _deleted_ updates? Not declined updates? If so, that’s kind of a problem; it shouldn’t be deleting updates on the first run. That should only happen after running the script for a while. The script tries to track when it declined an update and then will delete it after a set number of months (the default is 3, I think).

For now I’m going to assume you are using stand-alone WSUS and have declined 16742 updates. If so, it could take a while to reclaim that space. Simply declining them isn’t enough; you then need to run the cleanup wizard to clean up the actual disk space. I’m not sure if there’s any kind of wait time needed before the wizard will delete declined updates. Either way, experience suggests that the wizard isn’t all that great in that regard. Every once in a while you just have to burn it all down and start again by deleting all of the content and then using wsusutil /reset to re-download all approved content.
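The “burn it all down” procedure, roughly (the content path is an example; confirm yours before deleting anything, and stop the WSUS services first):

```powershell
# Stop WSUS and IIS so nothing has the content files open.
Stop-Service WsusService, W3SVC

# Delete the downloaded content. 'D:\WSUS\WsusContent' is an example path;
# use whatever content directory your WSUS install actually points at.
Remove-Item 'D:\WSUS\WsusContent\*' -Recurse -Force

Start-Service W3SVC, WsusService

# Tell WSUS to verify and re-download content for all approved updates.
& "$env:ProgramFiles\Update Services\Tools\wsusutil.exe" reset
```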

Thanks so much for the reply! No – we are using WSUS in conjunction with SCCM. I inherited this whole setup, so it’s totally possible that it was misconfigured. I am not knowledgeable enough yet to know what, if anything, is configured incorrectly; I just saw how out of control the update content folder is and started looking into how to get it fixed. I am the de facto SCCM guy around here, but I’m totally self-taught and learning on the fly as best I can. My internet research into patch management in SCCM led me here. I previously used the WAM script on a stand-alone WSUS server and had good luck with that. It’s also totally possible that someone set up SCCM and named the patch source folder “WSUS_Sources” but in reality it’s actually the folder of patches that are co-managed by WSUS/SCCM. As I said, I’ve been reverse engineering what has been done and trying to clean up as best I can.

In terms of what I’ve done so far and the results:
I ran your script twice with FirstRun (once in an admin PowerShell session, once using PSEXEC as you described), both from the site server. Not sure if PSEXEC is needed when I’m running as admin directly on the server, since PSEXEC seems to be for elevated remote execution, but again, self-taught, so I tried it with PSEXEC since you recommend that in the instructions.

I also ran once without the FirstRun flag.

No one here is approving/declining updates manually. I double checked and it looked like there were roughly 10 updates that were “approved” status in WSUS. They were all super duper old/irrelevant (win 7, 8, itanium) so I manually selected them and declined them.

As far as I can tell WSUS is set up correctly so I’m thinking perhaps the way ConfigManager is handling updates might be set up incorrectly. I’m going to start researching how that should be set up now. Any other thoughts or advice is appreciated.

Ah, maybe I should have asked for clarification on what you meant by ‘WSUS source folder’. I assumed you meant the WSUS content folder, which should only hold the EULAs since ConfigMgr handles the downloading and distribution of the content. So what you mean is that the source for your ConfigMgr deployment packages is over 400 GB? If so, that kind of explains it. The script is unlikely to immediately shrink that content. What it will do is decline updates in WSUS, sync updates so that those declined updates are now expired in ConfigMgr, and then remove all expired updates from your SUGs. After that, and only after that, there are background processes built into ConfigMgr that run every 7 days which will remove any unneeded/undeployed updates from your deployment packages and their source folders. So, in theory, just wait a week or so and you should see that come down even further. I’d suggest looking at the number of updates in your deployment packages and then comparing in a week.

And yes, using PSEXEC isn’t strictly necessary. The script requires a _lot_ of permissions and the one thing that’s practically guaranteed to have everything the script needs is the computer account of your site server.

I think the WSUS source folder is actually out of control, not just the SCCM content folder, but I think I figured out why … we had a 3rd-party patch management plug-in, and it was configured to download all languages AND the updates were not being marked as expired in any automatic way. I went through and manually expired a bunch of them and saw an immediate chunk of space get freed up. I think, as you said, I’ll see another big chunk freed up after SCCM does its weekly clean-up. Thanks again, truly appreciate everything you do for the community and being so generous with your time!

Ah yes, that’ll do it. Your 3rd party needs to download the content into the SoftwareDistributionFolder. Are they superseding updates for you? That’s something I was able to convince (along with others I’m sure) Patch My PC to start doing because it helps solve this problem. They should eventually expire and remove older updates from their catalog too which, if memory serves, Adobe does with their catalog.

Your script works perfectly. I have just one question:
How do I filter out the updates for ARM64-based systems (like KB4456655)?
Is it just a matter of adding a name to one of the scripts, or do you need to edit the scripts?

When I run the script I am using the same switches that you have documented. However, what I have noticed is that while the expired updates no longer appear in my list of All Software Updates, when I look at the deployment packages the expired updates still remain.

Is there an easy way to clean these out? They show as downloaded but not deployed, which I guess is a good thing, and I’ve checked the membership to see if the updates are part of an update group and they are not. It would be great to have a way to clean up the content stores as well, as it appears I still have the content floating around out there in the ether and would really like to clean up these groups.

The script doesn’t currently do that though it wouldn’t be hard to add. However, there is already a background maintenance process built into ConfigMgr that does exactly that. So if you do nothing right now they’ll be removed from the deployment package within a week or so.

What the script does do is compare the package source folder to the updates in the deployment package and remove any source data that’s been orphaned for whatever reason. In my experience it’s pretty rare for that to happen.
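Conceptually, that orphan check looks something like this (purely illustrative; the path is an example and the list of known content IDs would really come from the ConfigMgr provider, not a hard-coded array):

```powershell
# Deployment package source folders hold one GUID-named subfolder per piece
# of update content. Any subfolder whose name ConfigMgr no longer knows
# about is orphaned source data.
$packageSource   = '\\server\Sources\Updates'          # example path
$knownContentIDs = @('11111111-2222-3333-4444-555555555555')  # from ConfigMgr in reality

Get-ChildItem -Path $packageSource -Directory |
    Where-Object { $knownContentIDs -notcontains $_.Name } |
    ForEach-Object {
        Write-Host "Orphaned content folder: $($_.FullName)"
        # Remove-Item $_.FullName -Recurse -Force   # uncomment once verified
    }
```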

Curious, as our deployment package for the ADR for Defender updates has expired updates in it dating back to January; so even though they were expired, they were still in the deployment package. Is there a way to kick off the maintenance process manually to see if something is amiss?

Maybe it’s just us, but I’m finding a lot of orphaned data still in the deployment packages. In most cases I have been able to reduce our footprint on some of the older packages significantly.

I’m not a programmer, so I didn’t want to bother you, but yeah, I couldn’t find anything about standalonewsus when searching through the script. I was wondering if this was an upload of an older version too. Thanks for your help and work on this script!!

Ok, nope, I’m the idiot here. Somewhere along the line I screwed up a merge and overwrote the version of the script that has the stand-alone WSUS functionality. I’ll have to sort that out and will reply here when that’s resolved.

Hey man, not gonna lie, this is pretty awesome. However, I’d like to see a “LogOnly” parameter or something that just spits out what it *would* decline/remove/change versus actually changing it, and outputs that to a specified file for review.