Next up, I’m going to use the Description field of the Patching Group in AD, along with the schedule detail from Get-PatchDate.ps1, to build another object we can pull from later as we start to do things like:

Send an e-mail with the details of the patching to the server owners for review

Let those owners know when the window(s) will occur

Use the same information when we start telling SCCM to create Deployment Packages with Maintenance and Deadline windows using the schedule information.

First, the code:

#
# Created By: Avram Woroch
# Purpose:
# To obtain Patching Schedule information, which is contained in the Description field
# of the Patch Group object in AD. We are assuming a group name of:
# SRV-S0-PATCHING-PROD1A, SRV-S0-PATCHING-PROD2B, etc.
# also we are assuming a Description field that contains 3 fields, delimited by ^ in the format of:
# <Whatever>^<PatchWindowStart>^<PatchWindowEnd>
# We don't store the patch day of month here, as we may need to do one-off patching
# We are then left with an $Object called $objPatchScheduleList which contains:
# $PatchGroupName $PatchingDate $WindowStart $WindowEnd
# Usage:
# Get-PatchingScheduleInfo.ps1
# MODIFY THIS VARIABLE - the -like "name" should be the common name for the SET of patch groups
$PatchGroups = Get-ADGroup -Filter {Name -like "SRV-S0-Patching*"}
# Create a custom object that contains the columns that we want to export
$objPatchScheduleList = @()
Function Add-ToObject{ $Script:objPatchScheduleList += New-Object PSObject -Property @{ PatchingGroup = $args[0]; PatchingDate = $args[1]; WindowStart = $args[2]; WindowEnd = $args[3]; } }
$PatchingDate = ""
# Loop through each of the groups
ForEach ($Group in $PatchGroups)
{
# Get the group's Name and Description
$PatchGroup = Get-ADGroup -Properties description $Group | Select Name,Description
# Store the resulting group name
$PatchGroupName = $PatchGroup.Name
# Split the group name to get the unique portion we commonly refer to it as - eg: PROD1A
$PatchGroupTemp = $PatchGroupName -split "-"
$PatchGroupSet = $PatchGroupTemp[3]
$PatchGroupSet = $PatchGroupSet.Substring(0,$PatchGroupSet.Length-1)
$PatchingDateTemp = '$PatchDay'+$PatchGroupSet
$PatchingDate = $ExecutionContext.InvokeCommand.ExpandString($PatchingDateTemp)
# Create a $Desc array and use -split to use the delimiter to break apart the variables
if ($PatchGroup.Description) {$Desc = $PatchGroup.Description -split "\^"} else {$Desc = @("","","")}
# WindowStart is Field1 after -split
$WindowStart = $Desc[1]
# WindowEnd is Field2 after -split
$WindowEnd = $Desc[2]
# Send those details out to the object defined earlier
Add-ToObject $PatchGroupName $PatchingDate $WindowStart $WindowEnd
}
$objPatchScheduleList

This isn’t a lot different from Get-PatchDate, and the same sort of logic is used: build an object that we can reference later using existing data, and split apart some fields to make them more readily usable later on.

As you can see I’ve populated this with dummy information, but I can revise later.

Some things I think of now as I look at it, but want to stop messing with it because it works:

I probably should store the “Short Patch Name” – eg: “PROD3B” – in a column; it might save a few steps in the rest of the work later on.

I know I’m going to have situations where the WindowEnd is the next day in the AM – eg: 22:30-04:30. I don’t yet know how I’m going to factor for that. Probably some logic that says “if $WindowEnd < $WindowStart, $WindowEndDate = $PatchingDate + 1”. We’ll see. I may find out that WindowEnd is better suited as WindowDuration with the # of hours, but I wanted to make it easy to keep in the Group Description field.
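A minimal sketch of that “if $WindowEnd < $WindowStart, add a day” idea, using sample values in the same shapes the script produces (the values and the en-US date format are my assumptions):

$PatchingDate = '11/15/2014'
$WindowStart  = '22:30'
$WindowEnd    = '04:30'
# Parse date+time together; note [datetime]::Parse honours the current culture
$start = [datetime]::Parse("$PatchingDate $WindowStart")
$end   = [datetime]::Parse("$PatchingDate $WindowEnd")
# If the end time is "earlier" than the start time, the window crosses midnight
if ($end -lt $start) { $end = $end.AddDays(1) }
"$start -> $end"

This keeps the Description field simple (just two times) while still yielding a correct end date for overnight windows.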

I have this feeling I might want to use actual AD schema, but I’m not sure it’s as maintainable as just telling someone to “Edit the Description”. It also means the Description becomes a dependency, and someone modifying it without knowing it is used for this might break things. To guard against that, one might run this script nightly and export the object to a CSV; if someone ever DID mess up the Descriptions, you could very easily refer back to what they were at the time. There are many other ways you could deal with that, though…

Anyone who knows me, knows that if I have to do something 3 times, I’m going to do two things:

1) Try to automate it

2) Get angry at you

Lucky for me, anger leads to productivity

The PowerShell that follows allows me to get the dates for patching for a site I’m doing work for. It would also work for at least two others that I’ve done similar work for, so it’ll definitely be of some broader use than just one site.

The general gist of the script is to find the dates for updating. This site does their updates with the following schedule:

DEV1 group happens on the First Thursday of the month – this covers a middle of week testing during business hours

PROD1 group happens on the Second Saturday following the DEV1 group – 9 days later. This handles systems that can be updated in the evening on a weekend.

PROD2 group happens on the Sunday following PROD1 – this handles systems that can be done on a weekend, but might be doing some manner of processing at night – batch updates, backup servers, etc.

PROD3 group happens on the Monday following PROD2 – this handles systems that could not be updated at night or during the weekend.

The problem is that 9 days after DEV is not always the “2nd Saturday”; sometimes it is the “3rd Saturday” – whenever the first of the month falls on a Fri/Sat/Sun. Equally, the Sunday after the Nth Saturday may fall in the following calendar week. So to try to figure this out, I found a script that gets “WeekDayInMonth”. That got me the basics, but I still needed to get MY dates from it.
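The underlying logic is simpler if you anchor everything to the first Thursday and work in day offsets rather than “Nth weekday” terms. A sketch (the function name is my own; the $PatchDayXXXX variable names match the ones mentioned in the update below):

Function Get-FirstWeekdayOfMonth ([int]$Month, [int]$Year, [System.DayOfWeek]$Weekday)
{
    # Walk forward from the 1st of the month until we hit the requested day of week
    $date = Get-Date -Year $Year -Month $Month -Day 1 -Hour 0 -Minute 0 -Second 0
    while ($date.DayOfWeek -ne $Weekday) { $date = $date.AddDays(1) }
    return $date
}

$PatchDayDEV1  = Get-FirstWeekdayOfMonth 11 2014 Thursday
$PatchDayPROD1 = $PatchDayDEV1.AddDays(9)    # Thursday + 9 is always a Saturday
$PatchDayPROD2 = $PatchDayDEV1.AddDays(10)   # the following Sunday
$PatchDayPROD3 = $PatchDayDEV1.AddDays(11)   # the following Monday

Because the offsets are fixed, the “2nd vs 3rd Saturday” question never comes up – for November 2014 this yields 11/6, 11/15, 11/16 and 11/17, matching the output shown below.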

I’m still working on how to make this “Better”, and I’ll likely seek input from my resident PowerShell guru (http://pleasework.robbievance.net/) but until then I’m trying it on my own. The usage is:

“Get-PatchDates.ps1” to load the module

“Get-PatchDate” with no parameters to get the dates for the current month. Or we can specify the MM YYYY on the command line to override. But this way allows me to set the script up to run on the first of the month, and get the dates for that month.

So what we end up with is output of:

PS C:\WINDOWS\system32> Get-PatchDate
In the year 2014 and the month of 11:
Group DEV1 will be patched on: 11/6/2014 12:00:00 AM
Group PROD1 will be patched on: 11/15/2014 12:00:00 AM - 9 days later
Group PROD2 will be patched on: 11/16/2014 12:00:00 AM - 10 days later
Group PROD3 will be patched on: 11/17/2014 12:00:00 AM - 11 days later

I’m going to have a bunch of posts coming up for some SCCM 2012 Windows Server Windows Updates scripting, that I hope will help someone avoid having to deal with a situation where you hear “So every month, we do this process, and it’s currently manual….”

This would also work with some general WSUS scripting, as long as you modified GPOs accordingly in a script and/or reconfigured groups of servers’ WSUS registry settings.
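For the registry-side approach, a hedged sketch of the policy values involved (the WSUS URL and target group name are placeholders; this assumes the policy keys already exist, otherwise create them with New-Item first):

$wu = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate'
# Point the server at a WSUS instance
Set-ItemProperty -Path $wu -Name WUServer -Value 'http://wsus.domain.local:8530'
Set-ItemProperty -Path $wu -Name WUStatusServer -Value 'http://wsus.domain.local:8530'
# Enable client-side targeting so the server lands in the right WSUS group
Set-ItemProperty -Path $wu -Name TargetGroup -Value 'PROD1A'
Set-ItemProperty -Path $wu -Name TargetGroupEnabled -Value 1
# Tell the Automatic Updates agent to actually use the WSUS server
Set-ItemProperty -Path "$wu\AU" -Name UseWUServer -Value 1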

Here’s hoping this all works…

UPDATE:

I came back realizing I was going to probably want to store the various dates as variables in memory to call in later scripts or functions in this process. Turns out, I thought of that (or perhaps, just got lucky…) when I used the variable names $PatchDayXXXXXX – which does exactly that. Later on, I’m going to use the PatchGroup Descriptions to designate a delimited Start and Stop time, so I can use those variables in my maintenance window and deadline designations.

This has been something I’ve had to keep flipping back to my notes for, so I figured I’d jot it down. Various products, such as vCenter Server, require the installation of .NET 3.5 on Windows 2012/2012 R2. Doing so, however, is not as ‘simple’ as finding the download and installing it. This component is actually a Windows Feature, and must be installed via this method.

Above is a screenshot from the Add Windows Roles and Features wizard. Do NOT check the box for Application Server and click NEXT.

If you see the above screen, prompting for .NET 4.5, then in my first step above – you actually checked the APPLICATION SERVER role, which has .NET 4.5 as mandatory. As we don’t need it, you should go back and uncheck it.

Note that highlighted yellow bar. The reason you’re seeing it is that the actual components for this are NOT cached on the local system. You need to click on the other circled link, to SPECIFY AN ALTERNATE SOURCE PATH.

Here you can specify the path. In this case, I’ve double clicked and mounted the Windows 2012 R2 with Update ISO so it is mounted as the E: drive – thus, I can specify E:\SOURCES\SXS as the source folder. Click OK then click INSTALL on the previous screen when it returns.

Click FINISH. While it doesn’t force you to reboot, it might be wise to both check for any Windows updates related to .NET 3.5 and reboot as well.

Some additional comments:

Network UNC paths – I’ve had mixed success with unpacking the ISO to a network share and specifying the name. Usually I’ve been short on time, so I couldn’t spend the time to troubleshoot. Suspected culprits are:

DFS-N name spaces give it grief

Long (>64) character UNC paths cause issues

Spaces in the path can cause grief

The MACHINE account for the computer doing the installation doesn’t have rights to the share, even if the USER does.

Command line installation – you can perform all of the above with PowerShell using the command:
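The command itself didn’t make it into the post; the usual equivalent of the wizard steps above looks like this (the E: source path assumes the ISO is mounted as in the earlier step):

# Install .NET 3.5 and point at the side-by-side folder on the mounted media
Install-WindowsFeature -Name NET-Framework-Core -Source E:\sources\sxs

The older DISM equivalent, which also works on systems without the ServerManager module loaded, would be “dism /online /enable-feature /featurename:NetFx3 /all /source:E:\sources\sxs /limitaccess”.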

So today I had a need to set SNMP parameters for all ESXi hosts in vCenter. Easily enough done at the SSH command line:

esxcli system snmp set --communities nw_public --enable yes

That’s going to set the community and enable SNMP. Everything else is default, and we’re not setting up any sort of security. SNMPv1/v2 don’t have security or encryption, vSphere v5.1 only supports setting a remote IP for a *trap* not a *get*, so we can’t do that, and we’re not using SNMPv3 (which has security and encryption built in). So this is all we really need to set.

PowerCLI has no equivalent that I’m immediately aware of. However, since vSphere v4.1, the functionality of “esxcli” at the command prompt has been available via PowerCLI with the “Get-EsxCli” cmdlet. It’s not really easy to wrap your head around at first. The best way to describe it is that once you start a Get-EsxCli session, you continue to execute commands against it until you exit out. Think RSH, Remote PowerShell or PSexec in similarity.

So here we have the script at hand. I’ll break it down after:

===== Set-ESXISNMP.ps1 =====

$esxlist = get-vmhost

$communities = "nw_public"

$enable = $true

$port = 161

foreach($item in $esxlist){

Connect-VIServer -Server $item -User root -Password "<rootpassword>"

$esxcli = Get-EsxCli -VMhost $item

$esxcli.system.snmp.set($null,"$communities","$enable")

$esxcli.system.snmp.get()

}

===== Set-ESXISNMP.ps1 =====

# Get the list of VMhosts in vCenter. This assumes you’ve already done “Connect-VIServer NW-VC1” or similar. I should have done better here. Anyway… this sticks it into an array called $esxlist.

$esxlist = get-vmhost

# next we define some variables, straightforward enough.

$communities = "nw_public"

$enable = $true

$port = 161

# Now we start a “foreach” loop, executing against each $item in the $esxlist

foreach($item in $esxlist){

# the first command we’ll run is connect-viserver –server $item (servername) with user and password added.

# user/password can be exported to saved credentials, but this is just as easy for now.

Connect-VIServer -Server $item -User root -Password "<rootpassword>"

# This is the way the world sets up a Get-EsxCli session, so I did it the same way.

$esxcli = Get-EsxCli -VMhost $item

# we're going to use "." between the same options used at the SSH command line to move through the command

# in the () we’re going to put values for Field1, Field2, Field3. How do we know the fields? I’ll get to that…..

$esxcli.system.snmp.set($null,"$communities","$enable")

# Then we do a quick GET of the same thing. Here, you’ll see the fields.

$esxcli.system.snmp.get()

}

Output looks like: (here you will see how I knew what the fields were)

Where “$port” would sit in position 8, as spaced out by $null entries – $null means you are not editing that particular position. I’m not sure whether you can use named parameters the way you can at the actual command line (“--port=161” or “--communities=nw_public”), but this way works for now.

This is my first really heavy usage into Get-EsxCli, but I’m sure I can do a LOT more with this now!

As those close to me will know, I’ve recently jumped aboard the #90DaysToMCSA challenge with a few fellows from work, and have been diligently studying for the 70-410 exam and learning what has been updated from 2008 R2 to 2012.

I stumbled across a little nugget today that I wanted to touch on, as I found it interesting. As we all know, the C:\WINDOWS\WINSxS folder is always huge and cumbersome. This is the Windows Side By Side folder, which basically keeps track of DLL hell; when you remove or add features or software, it helps find the right version. Helpful, but why the HECK is it always 6-8-10GB or more? Of course we know that you can use DISM options to remove Service Pack uninstallation files once applied, but often we’re installing from 2008R2 SP1 or 2012 media, and there is nothing to uninstall. Windows 2012, however, has a new feature. Not only can you use PowerShell to Remove-WindowsFeature, but you can also remove the installation media from disk.

See, it seems that also hidden in this C:\WINDOWS\WINSxS folder is a copy of some of the Windows installation media. This is under the guise of being able to add/remove features without being prompted for installation media, such as the original DVD, ISO, or a share on the network. I can understand why one would want this, but personally, I’d rather have my space back.
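A sketch of how that payload removal can be done on 2012, assuming you want to strip the on-disk sources for every feature you haven’t installed (run elevated; removed features can still be added later by pointing -Source at install media):

# Features that are "Available" have their payload on disk but aren't installed;
# -Remove deletes the payload, leaving them in the "Removed" install state
Get-WindowsFeature | Where-Object { $_.InstallState -eq 'Available' } |
    Uninstall-WindowsFeature -Remove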

First, let’s take a look at the folder size on a server I had kicking around with no real features on it, just used for testing and poking at:

6.07GB down to 4.26GB – or about a 30% reduction. That’s not too bad. I’m sure many of my readers (are there many? Okay, all 4 of you) are thinking “But it’s just 1.75GB, and my SAN’s DeDupe features will handle this, but how much does it matter?”. You’re right, SAN level DeDupe will certainly handle this – but why make it? First, having files on disk is just going to skew your stats. Sure, you DeDupe 70% maybe, but if YOU let there be 20% common data, aren’t you being just a little dishonest about the success rate?

Then, let’s think about ALL the scenarios where SAN DeDupe doesn’t help:

You don’t HAVE SAN level DeDupe.

You’re doing this on SSD’s, and your boot disk is small.

You’re doing this with Windows 2012 VM’s on a physical host, and not using Linked-Clone or similar functionality.

You’re doing ANY kind of off-SAN backup. Agent based backups inside the guest (sigh, really? It’s the 21st century….) would be the worst, but even virtualization aware backups would still read the data, even if they DeDupe and/or Compress it away. It might do it in an inline or post process, but why make it do the work?

You’re going to COPY the VM to another system – clones are going to need to copy these blocks.

You have scheduled antivirus scans, and they scan every file on the system, regardless of whether it is installation media or otherwise.

Recently we came across an issue with our Exchange 2010 environment related to ActiveSync and Apple iOS devices prior to firmware v6.1.2. As such we needed a way to not only get a report of users with device relationships by version/device, but also a means to setup a block for those devices if needed. It turns out that Exchange has a built in process for this by way of the ActiveSync Policies and their state can be either “Granted”, “Denied” or “Quarantined”. In the case of a Quarantine, the user will get a message on their phone and will no longer be able to access the system. However, upon remedying their issue, they will automatically be “Granted” by nature of the new OS/firmware now no longer matching the Quarantine policy search. This works exceptionally well for us, and I will document the steps I’ve used over the last few days to make this all work.

This should be relatively self-explanatory. We’re getting ActiveSyncDevices where the DeviceOS column/field is anything containing *iOS*, and then outputting only the UserDisplayName,DeviceType,DeviceOS,WhenChanged fields, and then exporting it to a CSV file. This CSV file can then be sorted and filtered as desired.
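The command itself isn’t reproduced above; a hedged sketch matching that description might look like this (the output path is my own placeholder):

# Report iOS devices with sync relationships, keeping only the useful columns
Get-ActiveSyncDevice | Where-Object { $_.DeviceOS -like '*iOS*' } |
    Select-Object UserDisplayName, DeviceType, DeviceOS, WhenChanged |
    Export-Csv -Path C:\Temp\iOSDevices.csv -NoTypeInformation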

2) As we only had iOS v6.x devices, we needed to put Quarantine policies in place. We could not, however, simply do “*iOS 6*” or “iOS 6.1*”, as this would also match the approved v6.1.2 version. Also, while it MAY be possible to Quarantine “*iOS*” and then Grant “*iOS 6.1.2*”, this would result in v6.1.2 being the ONLY approved version, and when v6.1.3, v6.2 or v7.0 comes out, new policies would need to be put in place. By creating only policies that Quarantine the existing v6.0, v6.1.0 and v6.1.1 devices, we avoid that issue:
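A sketch of one such rule – the exact DeviceOS string (including the build number) is a placeholder here, and in practice you would take the real strings from the report in step 1, creating one rule per vulnerable version:

# Quarantine a specific existing OS build; newer iOS versions simply won't match
New-ActiveSyncDeviceAccessRule -QueryString 'iOS 6.1 10B142' `
    -Characteristic DeviceOS -AccessLevel Quarantine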

This will show the UserDisplayName, their DeviceUserAgent (useful for determining the type of device) and what DeviceOS they were running. It is worth noting that following the update from a user, and the removal from Quarantine, a re-run of the above command will not show the user as removed; they simply are no longer Quarantined, and do not show up in the list. I confirmed this with my own device, as I upgraded from iOS 6.0.2 to iOS 6.1.2.
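The command behind this step isn’t shown above either; a sketch that produces the described columns, assuming the DeviceAccessState property available in Exchange 2010 SP1:

# List only devices currently sitting in Quarantine
Get-ActiveSyncDevice | Where-Object { $_.DeviceAccessState -eq 'Quarantined' } |
    Select-Object UserDisplayName, DeviceUserAgent, DeviceOS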

4) There also exists the ability to set the ActiveSyncOrganizationSettings to allow for an “administrator e-mail” account(s). This lets us put in e-mail address(es) that can get an instant notification of when a device gets quarantined or blocked. This way, we know as soon as the user knows. While it is unlikely we would do so, we could even proactively contact the user after seeing the alert, to ask if they need assistance.
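A sketch of that setting – the address is a placeholder, and -AdminMailRecipients accepts multiple comma-separated addresses:

# Notify these mailboxes whenever a device is quarantined or blocked
Set-ActiveSyncOrganizationSettings -AdminMailRecipients helpdesk@domain.local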

5) Finally, in the report from Step 1, it should be noted that users/mailboxes/devices that have not been properly/fully removed will still show up. For example, even if Bob Smith’s account is disabled, that mailbox and its devices will show up. Equally, I noted that my iPhone 4 was still showing, as I never did anything to remove the device. But more confusing is that my iPhone 5 (of which I only have one) showed up twice – once for iOS 6.0.2 and once for iOS 6.1.2.

I did attempt to purge my iOS 6.1.2 device to test what would happen, and upon my phone’s next sync, it emptied my mail folders, then refreshed, redownloaded all my mail, and current calendar appointments. When I checked to ensure that my sync folders were still accurate, all of my settings were intact. No interaction on my part was needed to reconnect, I was not prompted for credentials or settings, etc. As such, it seems that any device that is considered old, out of date or suspect, is fair game to delete and if it is in fact still active, it will simply recreate the relationship.

The last largely outstanding task is to find a way to *customize* the Quarantine message. Each policy/filter should be able to have its own, and according to documentation it should be reachable via the ECP (eg: https://mail.<domain.name>/ECP), but I was having no luck getting it to do more than show “loading”. Another day, perhaps…….