Kevinisms

Hello and welcome to the blog of Kevin Fason! This is my day-to-day technical journal. Currently I am the End User Computing Architect for a large engineering firm in Denver. I've held various roles over the years, from communications (PBX and voicemail) to administration, even IT global manager. Deployment is a big part of my mindset (OSD, MDT, going back to dd), so I have come across lots of scenarios and issues working for a firm that's on all the continents and in zillions of countries.

Friday, January 11, 2019

As my firm got purchased by another, I am now starting to collapse the ConfigMgr environment. It was designed to service 50K endpoints before breaking a sweat, but it now manages only a couple thousand systems that have not been migrated yet, so it is way overkill. Since we also use 1E Nomad, all we have are the back-office roles to contend with. First was to downsize the primary site to remove the multiple MP/DP/SUP roles, and then the SUPs on the secondaries. After that, I was to start collapsing the secondaries and converting those hosts to be DP-only. Eventually, this instance will drop to a couple of servers kept around for historical use.

So after uninstalling the secondary, cleaning up SQL, etc., I installed a DP role on the host. Once done, I started injecting the prestaged content; however, I started seeing the following errors in the DP's smsdpprov.log.

PXE was not flagged during install, so after double-checking settings I ran off to Google to see what others had found on this. All I could find was people saying to live with it. That seemed strange, as this was not happening on other DPs I checked, so I thought I'd try a quick PXE enable then disable on the DP. By quick I mean enable it, come back later, and disable it once I validated it was installed via smsdpprov.log and distmgr.log. Sure enough, smsdpprov.log now only shows 'PXE provider is not installed' occasionally, so the problem is fixed.

Now if it will just finish hashing the prestaged content, I can update its boundary group so it will serve its purpose, and I can move on to the next one!

Monday, October 1, 2018

So I have new corporate overlords, as my firm was procured by a larger fish. Their ConfigMgr environment was still 2012 R2 on Windows Server 2008 R2, so one of my projects was to determine the path forward for SCCM, and a prerequisite was to get them to Current Branch. This was around the 1710 version timeframe. We had to choose between upgrading ConfigMgr first and then the server OS, or the servers first and then ConfigMgr. While I had 16xx Current Branch media around for the former, we elected to move all the roles to Windows Server 2012 R2 and then, once all were done, upgrade Configuration Manager itself. I'm sure I'll have notes around that.

First up was the SUP/WSUS environment, for which there were several servers. Below are random notes, not a how-to, from our experience of completely removing SUP and rebuilding from nothing on new 2012 R2 servers, as well as preparing them for new features in Current Branch. I would expect some of these to be resolved in newer server OS and/or ConfigMgr versions.

Before uninstall:

When you make note of the current SUP settings such as products and classifications, be sure to also clear the products and classifications out of the SUP role. Leaving them in place will cause the SUP to not sync from WSUS until you clear them, sync, set them up again, and sync once more.

Go to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Update Services\Server\Setup and copy the SQLServerName REG_EXPAND_SZ value so you have the current SQL server and instance. This is useful if it's on the primary site SQL server.
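A quick way to grab that value from an elevated command prompt, rather than browsing regedit, is reg query against the same key:

```bat
reg query "HKLM\SOFTWARE\Microsoft\Update Services\Server\Setup" /v SqlServerName
```

The output shows the server\instance string to save for later.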

Back up the SUSDB, just in case.
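If the SUSDB is on full SQL, a one-line backup from the command line might look like this; the server\instance and backup path are placeholders for your environment:

```bat
sqlcmd -S SQLSERVER\INSTANCE -E -Q "BACKUP DATABASE SUSDB TO DISK = 'D:\Backup\SUSDB.bak' WITH INIT"
```

The -E switch uses your Windows credentials, so run it from an account with backup rights on the instance.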

On the SUP server, delete SUPSetup.LOG and WSUSCTRL.LOG from the ConfigMgr log dir so you have clean logs to start with. This was more around the first bullet, as we had to remove everything again and start over with empty settings.

For uninstall:

When you uninstall the WSUS role, also uninstall the WID (Windows Internal Database) role on the server if it is installed.

Delete SUSDB from the SQL server, as the new server version uses a new WSUS database version. We were going from 2008 R2 to 2012 R2 WSUS.
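Dropping the old database can be done the same way via sqlcmd (server\instance is a placeholder; make sure the backup from before the uninstall is good first):

```bat
sqlcmd -S SQLSERVER\INSTANCE -E -Q "DROP DATABASE SUSDB"
```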

For install:

During install of the WSUS role it will enable WID even if you select SQL Server. On the Features dialog, uncheck it if you are using full SQL; otherwise, uninstall it after WSUS is installed. We put the database on the primary site DB server since that is supported, and we are using the Enterprise Edition of SQL.

If using a shared DB, enter the path to the root share, i.e. \\servername.domain.com\WSUS. It will append the WSUSContent folder onto that. We put it on the first installed WSUS/SUP server.

For the other servers, give their machine object (computer account) full rights to the shared WSUSContent folder on the filesystem and/or share.
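Granting a computer account full rights on the content folder can be scripted with icacls; DOMAIN\SUPSERVER2$ and the path below are placeholders (note the trailing $ on machine accounts):

```bat
icacls "D:\WSUS\WSUSContent" /grant "DOMAIN\SUPSERVER2$":(OI)(CI)F
```

The (OI)(CI) flags make the grant inherit to child files and folders.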

After install of the WSUS role, go to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Update Services\Server\Setup and modify SQLServerName to the SQL server/instance, as it defaults to WID (MICROSOFT##WID) even if you selected full SQL for the database. Do this before you start the configure task in Server Manager, otherwise it will fail out. It's a good idea to change it to the FQDN if it isn't already.
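Flipping that value over to the real SQL server can also be done with reg add; the FQDN and instance below are placeholders, and the type matches the REG_EXPAND_SZ value noted during the pre-uninstall checks:

```bat
reg add "HKLM\SOFTWARE\Microsoft\Update Services\Server\Setup" /v SqlServerName /t REG_EXPAND_SZ /d "sqlserver.domain.com\INSTANCE" /f
```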

After the configuration task, go into SSMS if using full SQL and configure the following:

The account you installed with is the owner of SUSDB; change it to sa.

Set the database file to 100 MB growth, unlimited size, and the log file to 50 MB growth with a 10 GB max. You should plan on approximately 20 GB for WSUS, but that is pure WSUS.
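The owner change and file-growth settings can be done in T-SQL instead of clicking through SSMS. A sketch via sqlcmd, assuming the usual SUSDB/SUSDB_log logical file names (verify yours with sp_helpfile first; server\instance is a placeholder):

```bat
sqlcmd -S SQLSERVER\INSTANCE -E -Q "ALTER AUTHORIZATION ON DATABASE::SUSDB TO sa"
sqlcmd -S SQLSERVER\INSTANCE -E -Q "ALTER DATABASE SUSDB MODIFY FILE (NAME = SUSDB, FILEGROWTH = 100MB, MAXSIZE = UNLIMITED)"
sqlcmd -S SQLSERVER\INSTANCE -E -Q "ALTER DATABASE SUSDB MODIFY FILE (NAME = SUSDB_log, FILEGROWTH = 50MB, MAXSIZE = 10GB)"
```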

After the initial setup in Roles, go into the WSUS console to Options | Update Files and Languages | Update Languages and match what was chosen in the SUP setup, such as EN only. It will be set to download all languages. The SUP did not change this when it was configured later.

In IIS Manager | Application Pools | WsusPool | Advanced Settings, set:

Private Memory Limit = 0

Queue Length = 25000
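Those two WsusPool settings can also be set from the command line with appcmd instead of the IIS GUI; the property paths below are the standard IIS schema names for those Advanced Settings entries:

```bat
%windir%\system32\inetsrv\appcmd.exe set apppool "WsusPool" /queueLength:25000
%windir%\system32\inetsrv\appcmd.exe set apppool "WsusPool" /recycling.periodicRestart.privateMemory:0
```

Handy if you have several SUP servers to configure identically.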

For database sharing you need to configure all servers to also share the ContentDir. In IIS Manager | Server Name | WSUS Administration | Content | Manage Virtual Directory | Physical Path. Be sure to use the FQDN here; it forgets the slashes otherwise.
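The same physical path change can likely be scripted with appcmd; the share path below is a placeholder, and "WSUS Administration" is the default WSUS site name (verify yours in IIS Manager before relying on this):

```bat
%windir%\system32\inetsrv\appcmd.exe set vdir "WSUS Administration/Content" /physicalPath:"\\servername.domain.com\WSUS\WSUSContent"
```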

Post install:

We did not have this problem, but expected it due to how long the existing SUP/WSUS had been in place. You might have issues with the catalog version.

Tuesday, September 4, 2018

A friend of mine came to me after he noticed several KMS servers in his environment, yet he should only have the one, as he also has ADBA set up for the newer operating systems. In digging, it turns out the others were Windows 7 laptops. We are guessing the users did something to try to change editions, perhaps, but it was definitely intentional. Instead of putting in tickets with his support team, we chose to fix it via ConfigMgr ourselves. Note this was a focused case, so I did not spend much time making it more automatic, and some assumptions were made. If this issue resurfaces, I'll do something with compliance settings to truly handle it automatically.

First off, you can find out who is advertising as a KMS server by doing an SRV record lookup in DNS for _vlmcs._tcp.mydomain.com. nslookup is easiest, though you can use the DNS tool in RSAT and dig down via Forward Lookup Zones | mydomain.com | _tcp, where you'll see all the _vlmcs.* records.
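The nslookup version of that SRV lookup is a one-liner (replace mydomain.com with your AD domain):

```bat
nslookup -type=SRV _vlmcs._tcp.mydomain.com
```

Every host returned is advertising itself as a KMS server.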

The first thing we did was remove the bad records from DNS, leaving only the one good one. Since it was only a few systems acting up, we created a collection and added these hosts as direct members. Then we created a package with the following script to run on them, set to rerun weekly. Eventually, they went away. We kept an eye on the DNS records and removed them as needed, which turned out to be just once.

The script consists of several steps, and it's all run via slmgr.vbs. The first step uninstalls whatever key the system has present:
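A minimal sketch of that step, assuming slmgr.vbs in its usual system32 location; /upk is the slmgr switch that uninstalls the currently installed product key:

```bat
cscript //nologo %windir%\system32\slmgr.vbs /upk
```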

We were lucky in that it was only some Windows 7 systems doing it; however, if it had been other versions we would have had to detect the right OS so we could use the correct KMS key. Since we were just using a shell script, I was originally thinking of using ver to pull the version, then an if-then statement for the /ipk step. Something similar to this.

for /f "tokens=4-5 delims=. " %%i in ('ver') do set VERSION=%%i.%%j
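Putting that ver parse together with the /ipk step might look like the following sketch; 6.1 is the kernel version for Windows 7, and the key is a placeholder for the public KMS client setup key (GVLK) matching the edition in question:

```bat
for /f "tokens=4-5 delims=. " %%i in ('ver') do set VERSION=%%i.%%j
if "%VERSION%" == "6.1" cscript //nologo %windir%\system32\slmgr.vbs /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
```

Additional if lines would cover other versions (6.2 for Windows 8, 6.3 for 8.1, and so on) with their own GVLKs.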

Since the system is now unlicensed, we have it activate against the valid KMS.
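That last step is a couple more slmgr.vbs calls; kms.mydomain.com is a placeholder for the one valid KMS host, and 1688 is the default KMS port:

```bat
cscript //nologo %windir%\system32\slmgr.vbs /skms kms.mydomain.com:1688
cscript //nologo %windir%\system32\slmgr.vbs /ato
```

/skms pins the client to the known-good KMS host and /ato triggers activation immediately.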