The built-in reports in WSUS are adequate if you’re satisfied with only the information reported by the Windows Update agent on each client. But what if you want to include other information in the reports as well, like the computer description of each WSUS client?

Since I’m not a very good programmer I didn’t really want to fiddle around with the WU/WSUS APIs, and I’m not even sure the computer description could be reported to WSUS through the API anyway.

My go-to scripting language, being a Microsoft guy, is of course PowerShell. Could I perhaps achieve my goal by writing a PS script? Turns out I could. The full script is posted at the bottom of this post, and here is a download link in case the formatting gets all weird when you copy/paste it.

In writing this script I discovered the awesome PowerShell ISE plugin ISESteroids (http://www.powertheshell.com/isesteroids/), which helped me format my code properly. I’m not a PowerShell expert at all, so I take all the help I can get.

On to the meat of this post. I wanted to get the count of ‘applicable/not installed’ patches for every computer from the WSUS database. Because WSUS was installed with the Windows Internal Database (WID), doing this in SQL Server Management Studio requires this server name when the WSUS server runs on Server 2012 (R2):

\\.\pipe\MICROSOFT##WID\tsql\query

Once connected, I ran this query, which covers the update installation states ‘Installed’, ‘Installed Pending Reboot’, ‘Downloaded’ and ‘Failed’:
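The original query isn’t reproduced here, but a sketch of the idea, run from PowerShell over the WID named pipe against the SUSDB public views, could look like the following. The view names (vComputerTarget, vUpdateInstallationInfo) come from the SUSDB PUBLIC_VIEWS schema, but the exact column names and which State values you count as “missing” are assumptions to verify against your own instance:

```powershell
# Sketch: fill $SQLDataset from SUSDB over the WID named pipe.
# View/column names are taken from the SUSDB PUBLIC_VIEWS schema, and the State
# values counted as "missing" are assumptions; verify both against your instance.
$ConnectionString = 'Data Source=np:\\.\pipe\MICROSOFT##WID\tsql\query;Initial Catalog=SUSDB;Integrated Security=True'
$Query = @'
SELECT ct.Name,
       COUNT(uii.UpdateId) AS NumberOfMissingUpdates,
       ct.IPAddress,
       ct.LastSyncTime,
       ct.OSArchitecture,  -- column name may differ between WSUS versions
       ct.OSDescription
FROM PUBLIC_VIEWS.vComputerTarget AS ct
INNER JOIN PUBLIC_VIEWS.vUpdateInstallationInfo AS uii
    ON uii.ComputerTargetId = ct.ComputerTargetId
WHERE uii.State IN (2, 3, 5) -- 2 = NotInstalled, 3 = Downloaded, 5 = Failed
GROUP BY ct.Name, ct.IPAddress, ct.LastSyncTime, ct.OSArchitecture, ct.OSDescription
'@

$SQLConnection = New-Object System.Data.SqlClient.SqlConnection $ConnectionString
$SQLAdapter    = New-Object System.Data.SqlClient.SqlDataAdapter $Query, $SQLConnection
$SQLDataset    = New-Object System.Data.DataSet
[void]$SQLAdapter.Fill($SQLDataset)
$SQLConnection.Close()
```

The column order matches what the foreach loop below expects ($row[0] through $row[5]).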

This code returns a dataset (the $SQLDataset variable) from which we can extract data. So how do we do that? Since I wanted to pull the AD description of every computer returned by the SQL query, it made perfect sense to add a foreach statement to my code.

$Data = foreach ($row in $SQLDataset.Tables[0].Rows)
{
    # To make it a bit tidier we set new variables based on each "row" from the dataset
    [string]$ComputerName = $row[0] -replace '\.ad\.exampledomain\.com$', '' # we need to remove the DNS suffix (including the leading dot) to be able to search AD for the computer name
    [int]$NumberOfMissingUpdates = $row[1]
    [string]$IPAddress = $row[2]
    [string]$LastSyncTime = $row[3]
    [string]$OSArchitecture = $row[4]
    [string]$OperatingSystem = $row[5]
    # Requires the ActiveDirectory module (Import-Module ActiveDirectory)
    [string]$ADDescription = Get-ADComputer -Identity $ComputerName -Properties Description | Select-Object -ExpandProperty Description # since we want the AD description for every computer we need this
    # Create a custom object to hold the compliance data
    New-Object -TypeName PSObject -Property @{
        ComputerName           = $ComputerName
        NumberOfMissingUpdates = $NumberOfMissingUpdates
        IPAddress              = $IPAddress
        LastSyncTime           = $LastSyncTime
        OSArchitecture         = $OSArchitecture
        OperatingSystem        = $OperatingSystem
        ADDescription          = $ADDescription
    } |
    Select-Object ComputerName, NumberOfMissingUpdates, IPAddress, LastSyncTime, OSArchitecture, OperatingSystem, ADDescription # Select-Object is needed to list the properties in the correct order in the report
}

So what data do we get back from this? Let’s have a look in the $Data variable.

Awesome! So now we have all the data we want in a custom PS object that we can manipulate in whatever way we’d like. My goal was to put this data into an Excel spreadsheet that could be given to management whenever they wanted information about patch compliance.

Everyone who has ever set out to export PowerShell data to Excel has probably found it as frustrating as I did. Sure, you can create an Excel COM object, create a workbook and worksheet, and start adding the information. The drawback is that you actually need Excel installed on the computer where you run the PowerShell script. In my case installing it on our WSUS server wasn’t an option (and I wouldn’t have wanted to anyway), so I had to find another way.

Using the ImportExcel module, it’s as easy as piping the $Data variable to the Export-Excel cmdlet. To make things even easier, the cmdlet has parameters that let us freeze the top row and set other user-friendly options, making the Excel sheet ready to open and just scroll through from the get-go.
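As a sketch, assuming Doug Finke’s ImportExcel module is installed (Install-Module ImportExcel) and $Data is populated as above, the export boils down to a single pipeline; the path and worksheet name are placeholders:

```powershell
# Sketch: export the compliance data without having Excel installed.
# $ReportPath and the worksheet name are placeholders for your own values.
$ReportPath = 'C:\Reports\PatchCompliance.xlsx'
$Data |
    Export-Excel -Path $ReportPath -WorksheetName 'Compliance' -FreezeTopRow -AutoFilter -AutoSize
```

-FreezeTopRow, -AutoFilter and -AutoSize are the kind of “user friendly options” mentioned above.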

The data is exported to an Excel document whose path is specified by variables, which I’ll show below when I post the full script.

Writing this code quickly got pretty messy, as most scripting does before cleanup. I have a tendency to sometimes use cmdlet aliases instead of the full cmdlet names. Doing that in a script is bad practice, even if you don’t intend to share the script with anyone else. Doing it right from the start gets you into the habit of writing good code without having to do much cleanup afterwards.

I decided to try out the ISESteroids module that I mentioned at the beginning of this post and I was surprised at the amount of awesome features it contained. I haven’t begun to explore them all but I thought I would share my findings on this blog. Just follow the installation instructions on their site.

First I’ll post a screenshot of a part of my script as it looked before letting ISESteroids sink its teeth into it:

In my opinion the code isn’t all that bad, but it could definitely be better. So what are some of the things that ISESteroids can do? To help us mere mortals, there is an Auto-Format add-on that can be enabled, which appears next to the native Commands add-on tab.

Using this we can, among other things, automatically replace aliases with their full cmdlet names. Let’s try it out by clicking Apply.

Before:

After:

This is applied to every cmdlet alias in the whole script!

Another thing I hadn’t noticed about my script was that I’d put data into a variable that was never used later on, i.e. completely unnecessary. ISESteroids alerts us to this by “dimming” the variable.

Hovering the mouse over the variable will show an explanation. Right-clicking the variable and choosing PSSharper –> Manage Issue will open another add-on tab and give you options for what to do. In my case I just deleted that part of my code.

Another great feature when testing scripts is the extra “Run Script” button. Right-clicking it will let you run the script in different contexts and with different PowerShell versions. Very handy when trying to make sure the script will run as expected.

Being able to create an application (exe) from your script is another feature that’ll be very useful if you’re creating script tools that are intended to be executed by other people, for example the helpdesk guys.

Clicking the admin privileges checkbox will trigger a UAC prompt when the exe is started.

I’m going to try to put all this together now. This post got a lot longer than I expected when I started writing.

Running this script will now create a nice report and email it to me and management every week so we can follow up on patching that’s been done throughout the week.
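The emailing part can be sketched with the built-in Send-MailMessage cmdlet; the addresses, SMTP server and attachment path below are placeholders for your environment:

```powershell
# Sketch: mail the finished report. All names here are placeholders.
Send-MailMessage -From 'wsus-reports@exampledomain.com' `
                 -To 'management@exampledomain.com' `
                 -Subject "Patch compliance report $(Get-Date -Format yyyy-MM-dd)" `
                 -Body 'Weekly patch compliance report attached.' `
                 -Attachments 'C:\Reports\PatchCompliance.xlsx' `
                 -SmtpServer 'smtp.exampledomain.com'
```

Scheduled as a weekly task, this covers the “email it every week” part.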

This script doesn’t contain any error handling and is provided as-is. The account used to execute the script needs at least read access to the SUSDB database. Whenever I have time I might update the script here and make it more robust.

I was searching today because I wanted to find out which of my servers didn’t have a maintenance window (MW) applied. I could only find blog posts telling me how to list computers that do have MWs applied, so I fired up SQL Server Management Studio, put this together real quick and figured I might as well share it with the rest of the internet (those who find this, anyway). The query is a subselect query, i.e. it returns the servers that are not included in the query enclosed in parentheses. Notice that I have filtered ‘Operating System Name and Version’ to scope it to servers only; just remove both those filters to list all computers. I also have a non-recurring MW with a start date in the past targeted at servers, to prevent accidental deployments. That MW is filtered out as well; remove that part of the subselect query or replace it with your own.

SELECT
    SYS.Name0 AS 'Computer Name',
    SYS.Client0 AS 'Client Installed?',
    SYS.Operating_System_Name_and0 AS 'Operating System'
FROM
    dbo.v_R_System SYS
WHERE
    SYS.Operating_System_Name_and0 LIKE '%server%' AND
    SYS.Name0 NOT IN
    (
        SELECT SYS.Name0
        FROM dbo.v_ServiceWindow AS SW
        INNER JOIN dbo.v_FullCollectionMembership AS FCM ON SW.CollectionID = FCM.CollectionID
        INNER JOIN dbo.v_R_System SYS ON FCM.ResourceID = SYS.ResourceID
        WHERE SYS.Operating_System_Name_and0 LIKE '%server%' AND SW.Name != 'Old MW'
    )
ORDER BY SYS.Name0

Determine which site systems should be placed in the DMZ. In my case I chose to install an MP, SUP and DP. I also wanted all these roles to communicate only via HTTPS, for security reasons. This requires a certification authority server (ADCS), so install one first if there is none in your environment. After that, create certificate templates and GPOs according to this guide. It’s VERY important to create the certificate templates with Windows XP/Server 2003 compatibility, otherwise client authentication will fail. It’s also very important to make sure the clients choose the right certificate when they register with their management point; more about that later.

Install WSUS. To limit traffic and the number of firewall ports that need to be opened, I went with a shared WSUS configuration according to this guide. If you don’t need/want this, just install WSUS as usual with either a full SQL Server installation or WID, and jump to step 4.

On the primary server, make sure the computer account of the DMZ server has db_owner permissions on SUSDB in SQL. You need to add this computer account manually in SQL Server Management Studio by creating a new login. Map the account to SUSDB under “User Mapping”, check “db_owner” and then click OK.
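The same grant can be scripted in T-SQL instead of clicking through SSMS. This is a sketch where DOMAIN\DMZWSUS$ is a placeholder for the DMZ server’s computer account (note the trailing $ on computer accounts):

```sql
-- Sketch: create a login for the DMZ server's computer account
-- and map it as db_owner on SUSDB. DOMAIN\DMZWSUS$ is a placeholder.
CREATE LOGIN [DOMAIN\DMZWSUS$] FROM WINDOWS;
GO
USE SUSDB;
GO
CREATE USER [DOMAIN\DMZWSUS$] FOR LOGIN [DOMAIN\DMZWSUS$];
ALTER ROLE db_owner ADD MEMBER [DOMAIN\DMZWSUS$];
GO
```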

The DMZ server will also reuse the existing WSUS content from the primary server. On the primary server, add read/write permissions on that folder for the DMZ server’s computer account.
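On Server 2012 and later the share and NTFS permissions can be granted from PowerShell instead of the GUI; the share name ‘WSUS’, the folder path and the DOMAIN\DMZWSUS$ account below are placeholders:

```powershell
# Sketch: run on the primary server. 'WSUS' share name, D:\WSUS path and
# DOMAIN\DMZWSUS$ (the DMZ server's computer account) are placeholders.
Grant-SmbShareAccess -Name 'WSUS' -AccountName 'DOMAIN\DMZWSUS$' -AccessRight Change -Force

# Matching NTFS modify rights on the underlying folder:
icacls 'D:\WSUS' /grant 'DOMAIN\DMZWSUS$:(OI)(CI)M'
```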

I also added NTFS read permissions for the DMZ computer account on the actual SUSDB.mdf file on the primary server. I’m not sure if this is needed.

On the DMZ server, install WSUS via Server Manager. Just choose WSUS Services and NOT any of the database options. For content location choose the WSUS content folder share from the primary server, i.e. \\primary\WSUS or the like.

On the DMZ server, start regedit as administrator. Go to HKLM\Software\Microsoft\Update Services\Server\Setup. Change the values for ContentDir and SqlServerName to the following:
ContentDir: \\primary\WSUS
SqlServerName: hostname of primary server
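The two registry changes can also be scripted instead of edited by hand in regedit; ‘\\primary\WSUS’ and the ‘primary’ hostname are the same placeholders as in the steps above:

```powershell
# Sketch: point the DMZ WSUS installation at the primary server's content share and database.
$SetupKey = 'HKLM:\SOFTWARE\Microsoft\Update Services\Server\Setup'
Set-ItemProperty -Path $SetupKey -Name ContentDir    -Value '\\primary\WSUS'
Set-ItemProperty -Path $SetupKey -Name SqlServerName -Value 'primary'  # hostname of the primary server
```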

On the DMZ server, start the WSUS console and connect to the primary server. If it’s working, the shared database configuration is ok!

On the primary server, install a new site system server from Administration –> Site Configuration –> Servers and Site System Roles –> Create Site System Server. Pick your DMZ server. Add the MP, SUP and DP roles. Click Next until completion; we will change the HTTPS settings later. Installing the roles might take a while.

Configure WSUS/SUP to use SSL (HTTPS). On the DMZ server, open the IIS console. Create a domain certificate according to this guide.

DP: On the primary server, start the SCCM console and go to Administration –> Site Configuration –> Servers and Site System Roles. Click your new DMZ server and then right click Distribution point and choose properties.

Select HTTPS and click OK.

Right click Management point and change communication to HTTPS as well, then click OK.

Right click Software update point and put a checkmark in “Require SSL communication…”.

Configure the primary site to use HTTPS when available.

Right click the site and choose properties.

Click the Client Computer Communication tab. Mark the checkboxes according to the screenshot. Click Set and import the Root CA certificate. You need to export this certificate from the CA server first, instructions here. Click Modify to set certificate selection to your satisfaction.

This is where certificates can start to become confusing. The certificate templates that you created earlier are designed for client authentication. However, if there are other certificate templates (with other compatibility settings) published in your domain that are also used for client authentication, and those certificates are already enrolled by your clients, it can become a problem if the clients choose the wrong certificate when they register with their management point. I’ve found that the easiest solution is to make sure the SCCM client certificate templates have the longest validity period. If you do this, the default behavior of SCCM is to automatically choose that certificate.

Make sure the DMZ management point is published to DNS.

Add the timeout error code to the WSUS scan retry error list. The default behavior in SCCM is that clients pick a SUP at random and try to scan for metadata against it. If it fails, the client tries 3 more times and then switches over to another SUP. However, it only fails over on certain error codes, and a timeout error (0x80072ee2) WILL NOT make the client fail over. You will almost certainly run into this error, so it’s best to add it to the error code list. Do this on the primary site, using this excellent guide.

Also very important: make sure your clients choose the correct management point, otherwise they will never get the correct client policies. If you run a CM12 R2 SP1 site you can force clients to only use MPs according to their respective boundaries/boundary groups, which is great.

If you run only a plain R2 site (at least CU3), you need to get more creative and hard-code management points in the registry. Check out this guide and implement a GPO or similar to add these registry settings to your DMZ clients.
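As a sketch of the registry approach, assuming the AllowedMPs multi-string value introduced in CM12 R2 CU3 (verify the path and value name against the referenced guide before deploying; the MP FQDN is a placeholder):

```powershell
# Sketch, assuming the AllowedMPs mechanism from CM12 R2 CU3.
# The value name/path and the MP FQDN below are assumptions to verify.
New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\CCM' -Name 'AllowedMPs' `
    -PropertyType MultiString -Value @('dmz-mp.ad.exampledomain.com') -Force
```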

At work we’d been having intermittent issues with SCCM’s remote control functionality for a week. Some days it seemed like we could connect to every machine; other days it felt like a 50/50 chance of it working. We used workarounds (Lync/SfB screen sharing) since no one had the time to actually dig into what was happening.

Yesterday I had enough and started looking.

I tried to connect to a machine that I knew had had issues before. It didn’t work this time either. The first thing to check is obviously the log files. On my computer (the machine initiating the remote control request), I went to %TEMP%, which in my case resolved to C:\Users\username\AppData\Local\Temp, though it commonly resolves to a subfolder of AppData\Local\Temp. I opened CmRcViewer.log in CMTrace.exe (the preferred log file reading tool).

Big red errors. But “Unknown error” is not very helpful at all.

CMTrace has a wonderful error lookup feature that translates many hexadecimal error codes (and sometimes decimal ones too, I believe) into more meaningful text. Looking up the error codes 8007274C and 80072AF9 gives the following:

1. “A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.”

2. “No such host is known.”

Pinging the remote computer worked and DNS resolved correctly as well, from both my computer and the SCCM server, so it couldn’t be that.

The more general error message listed in clear text below the two “Unknown error” entries kind of shoots from the hip, listing the most common causes of a failed connection. In this case it was actually right: the port used for remote control, 2701, was not reachable from my computer to the remote computer. I found this out by running the PowerShell cmdlet Test-NetConnection.

Test-NetConnection -Port 2701 -ComputerName RemoteComputerNameHere

Of course, you can use the old-school telnet client or PuTTY as well.

telnet RemoteComputerNameHere 2701

So why wasn’t the port reachable? My first thought was that it had to be a firewall issue. There is no firewall between the two subnets, but the Windows Firewall is enabled on all machines in our environment.

I logged on to the remote computer via RDP instead and started poking around. The firewall exception was there. Why wasn’t it working, then?

What I found was that the network location setting was in some kind of limbo on the remote computer: it was stuck on “Identifying” even though the computer had been powered on for more than 24 hours.

PowerShell to the rescue once more (even though I could of course have opened services.msc, since I’d RDP’d into the machine anyway, but where’s the fun in that?). I simply restarted the service.
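Given the stuck “Identifying” network profile, the service in question is presumably Network Location Awareness; a sketch of the restart, assuming NlaSvc is the service name on your Windows version:

```powershell
# Restart Network Location Awareness so the network profile is re-evaluated.
# -Force is needed because other services (e.g. Network List Service) depend on it.
Restart-Service -Name NlaSvc -Force
Get-Service -Name NlaSvc  # verify it came back up
```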