Archive

I recently ran into a situation where a client has a group per server for Administrators, Remote Desktop Users, and hopefully, Service Accounts. This may or may not be the best way of dealing with this, but it does solve a need by moving user access into AD instead of configuration on local servers. It’s a little easier to centralize and manage for administrators who may have access to AD but not to the servers themselves (e.g. HelpDesk users). The problem, as described below, is that the rights for the service account groups have been set manually on systems as they are built or as the need arises. This has resulted in inconsistencies, as one might expect. So I found a way to standardize and bring it all “back up to code”, as it were.

PROBLEM:

You have a need to set a user or group to have “Log on as a Service” or “Log on as a Batch Job” rights. This can be done via the Local Security Policy (secpol.msc) or via GPO. However, there are two obvious issues with this:

1) Using SECPOL.MSC means you’re editing the local security policy. While this may be the only way to accomplish this, it is decentralized and hard to maintain consistently.

2) Using the GPO method only allows you to apply a fixed, static set of user(s) or group(s) to the affected machines.

However, if you need a 1:1 relationship between a dynamically named group and each system, GPOs and the Local Security Policy leave something to be desired. There is no functionality within a GPO to say “apply GRP-%SERVERNAME%-SVC with these rights” and have it resolve per machine – at least not for the Log on as a Service right. Other methods let you populate existing groups that already hold rights, but they cannot dynamically name a group in this GPO location, affect the Local Security Policy, or set the rights for that local group.

REQUIREMENT:

Have each server/system have a group such as GRP-SERVER01-SVC group identifying service accounts. This would be a company policy scenario, and would ensure that administration and auditing of local group memberships was ONLY done via Active Directory, and could be done via delegated rights by users who may not have rights to login to the server.

Have the group apply only to the named server. E.g. GRP-SERVER01-SVC should have rights on SERVER01, but not on SERVER02 or SERVER03.

If possible, one should also be able to add a GRP-ALLSERVERS-SVC group to the local group, for service accounts that are globally allowed – e.g. DOMAIN\svcAutomation, DOMAIN\svcBackup, etc.

Centrally manageable

Automatic, dynamic, updates and standardizes over time.

OPTIONAL – also do similar for the pre-existing local groups of “Administrators” and “Remote Desktop Users” for a corresponding GRP-%COMPUTERNAME%-ADM and GRP-%COMPUTERNAME%-RDP as appropriate.

This MUST be run with the -u/-p switches to specify the user to run as, along with -h (“highest privileges”). The -c switch must also be used to copy the batch file to the local system so it can run.
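The batch file itself isn’t reproduced here, but a minimal PowerShell sketch of what it needs to do might look like the following. This is an assumption based on the log output shown below – ntrights.exe (from the Windows Server 2003 Resource Kit), the group name, and the PsExec invocation are all examples, not the author’s actual script.

```powershell
# Sketch (NOT the author's actual SET_LOGONASSERVICE.BAT) of the two actions
# the script performs. Assumes ntrights.exe sits alongside the script.
$localGroup = "Service Accounts"

# Create the local group if it does not already exist (ignore the error if it does)
net localgroup "$localGroup" /comment:"Service accounts - managed via AD" /add 2>$null

# Grant the Log on as a Service right to the local group
.\ntrights.exe -u "$localGroup" +r SeServiceLogonRight

# Example PsExec invocation to push this to a server remotely
# (PsExec prompts for the password when -p is omitted):
#   psexec \\SERVER01 -u DOMAIN\AdminUser -h -c SET_LOGONASSERVICE.BAT
```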

You will see entries in the log similar to:

Granting SeServiceLogonRight to Service Accounts on \\NW-ADCS1... successful
Granting SeServiceLogonRight to Service Accounts on \\NW-DC1... successful
Granting SeServiceLogonRight to Service Accounts on \\NW-DC2... successful

5) We now have a local group called “Service Accounts”, and this local group has the “Log on as a Service” right.

We can verify this by running “SECPOL.MSC” on one of the servers and checking the rights assignments:

Sure enough, the local “Service Accounts” group is listed.

6) We can now handle the remainder of this via a normal GPO, using Group Policy Preferences (Local Users and Groups) with DYNAMIC naming.

Open the GPO editor and create a new GPO and name it something obvious such as “LOCAL_RESTRICTED_GROUPS”, and then edit it.

We will choose UPDATE for an action, as the group should already exist based on our previous work.

The group name will be “SERVICE ACCOUNTS”.

Click ADD to add members.

This is where the magic comes in. If you press the “…” beside the NAME, you can search for the group/user based on a traditional ADUC type search. But we don’t want that. Instead, place your cursor in the NAME field. Press the F3 key:

We get a list of VARIABLES! We want to use ComputerName so that we can reference the group as GRP-%COMPUTERNAME%-SVC and each computer will get its own group. Click SELECT.

Note the variable shows %ComputerName% as expected. Modify it as needed to add the GRP- prefix and -SVC suffix.

Click OK to close this window.

I’ve chosen to also add -ADM and -RDP groups for Administrators and Remote Desktop Users, as this is another use case.

Close and save the GPO.

9) Link your GPO appropriately:

Here I have a GROUPS-TEST OU and I have placed my NW-VEEAM01 server in this OU, along with the 3 associated groups. This will limit impact during testing.

10) On the system in question, check the current group memberships:

11) On the system in question, run a “gpupdate /force”

12) Again on the system in question, confirm the updated group membership:
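Steps 10 and 12 can both be done from a prompt on the target server; a minimal check might be:

```powershell
# Confirm the GPO populated the local groups (run on the target server)
net localgroup "Service Accounts"
net localgroup "Administrators"
net localgroup "Remote Desktop Users"
```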

There you have it. The ADM/RDP groups were easy, as they not only pre-exist but are also pre-defined. The real complication was the “Service Accounts” group, which does not pre-exist, has no special rights by default, and has no built-in, direct way of being granted them via the command line.

The recommendation would be to run SET_LOGONASSERVICE.BAT as part of the server build process/scripts, or have it pre-done in your deployment image/WIM/VM template. Equally, a PsExec run against all servers in the domain could re-apply this group on a periodic basis to ensure the rights exist. Additional error checking could be built in: check whether the command was successful, check whether the domain group exists, create it if required, and so on.
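A periodic PsExec sweep like the one suggested could be sketched as follows – the ActiveDirectory module filter, the account name, and the batch file location are assumptions:

```powershell
# Sketch: re-apply SET_LOGONASSERVICE.BAT to every server in the domain.
# PsExec prompts for the password when -p is omitted.
Import-Module ActiveDirectory

$servers = Get-ADComputer -Filter {OperatingSystem -like "*Server*"} |
    Select-Object -ExpandProperty Name

foreach ($server in $servers) {
    psexec "\\$server" -u DOMAIN\AdminUser -h -c SET_LOGONASSERVICE.BAT
}
```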

Some post comments:

Remember that the local group has a SID. If it is deleted and recreated with the same name, that won’t be enough – the Log on as a Service right will still be assigned to the old SID.

As the batch file creates the group with a description and we didn’t tell the GPO to do so, the GPO will create a new group if required, but with no description. This is your indicator that something is off, and hopefully that helps you troubleshoot.

Next up, I’m going to use the Description field of the Patching Group in AD, along with the schedule detail from Get-PatchDate.ps1 to build another object we can pull from later as we start to do things like:

Send an e-mail with the details of the patching to the server owners for review

Let those owners know when the window(s) will occur

Use the same information when we start telling SCCM to create Deployment Packages with Maintenance and Deadline windows using the schedule information.

First, the code:

#
# Created By: Avram Woroch
# Purpose:
#   To obtain Patching Schedule information, which is contained in the Description field
#   of the Patch Group object in AD. We are assuming a group name of:
#     SRV-S0-PATCHING-PROD1A, SRV-S0-PATCHING-PROD2B, etc.
#   We also assume a Description field that contains 3 fields, delimited by ^, in the format:
#     <Whatever>^<PatchWindowStart>^<PatchWindowEnd>
#   We don't store the patch day of month here, as we may need to do one-off patching.
#   We are then left with an object called $objPatchScheduleList which contains:
#     $PatchGroupName $PatchingDate $WindowStart $WindowEnd
# Usage:
#   Get-PatchingScheduleInfo.ps1
# MODIFY THIS VARIABLE - the -like "name" should be the common name for the SET of patch groups
$PatchGroups = Get-ADGroup -Filter {Name -like "SRV-S0-Patching*"}

# Create a custom object that contains the columns that we want to export
$objPatchScheduleList = @()
Function Add-ToObject
{
    $Script:objPatchScheduleList += New-Object PSObject -Property @{
        PatchingGroup = $args[0]
        PatchingDate  = $args[1]
        WindowStart   = $args[2]
        WindowEnd     = $args[3]
    }
}
$PatchingDate = ""

# Loop through each of the groups
ForEach ($Group in $PatchGroups)
{
    # Get the group's Name and Description
    $PatchGroup = Get-ADGroup -Properties Description $Group | Select-Object Name,Description
    # Store the resulting group name
    $PatchGroupName = $PatchGroup.Name
    # Split the group name to get the unique portion we commonly refer to it as - e.g. PROD1A
    $PatchGroupTemp = $PatchGroupName -split "-"
    $PatchGroupSet  = $PatchGroupTemp[3]
    $PatchGroupSet  = $PatchGroupSet.Substring(0, $PatchGroupSet.Length - 1)
    # Resolve the matching $PatchDay<set> variable (defined elsewhere by Get-PatchDate.ps1)
    $PatchingDateTemp = '$PatchDay' + $PatchGroupSet
    $PatchingDate = $ExecutionContext.InvokeCommand.ExpandString($PatchingDateTemp)
    # Create a $Desc array and use -split on the ^ delimiter to break apart the fields
    if ($PatchGroup.Description) { $Desc = $PatchGroup.Description -split "\^" }
    # WindowStart is Field1 after -split
    $WindowStart = $Desc[1]
    # WindowEnd is Field2 after -split
    $WindowEnd = $Desc[2]
    # Send those details out to the object defined earlier
    Add-ToObject $PatchGroupName $PatchingDate $WindowStart $WindowEnd
}
$objPatchScheduleList

This isn’t a lot different from the Get-PatchDetails, and the same sort of logic is used. Build an object that we can reference later using existing data, and split apart some fields to make them more readily usable later on.

As you can see I’ve populated this with dummy information, but I can revise later.

Some things I think of now as I look at it, but want to stop messing with it because it works:

I probably should store the “Short Patch Name” – e.g. “PROD3B” – in a column; it might save a few steps later on.

I know I’m going to have situations where the WindowEnd is the next day in the AM – e.g. 22:30-04:30. I don’t yet know how I’m going to factor for that. Probably some logic that says “if $WindowEnd < $WindowStart, then $WindowEndDate = $PatchingDate + 1”. We’ll see. I may find out that WindowEnd is better suited as WindowDuration with the number of hours, but I wanted it to be easy to keep in the group’s Description field.
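That “window crosses midnight” idea can be sketched in PowerShell – the times and date below are sample values:

```powershell
# Sample values in the same format as the Description fields
$WindowStart  = "22:30"
$WindowEnd    = "04:30"
$PatchingDate = Get-Date "2014-11-20"

# Parse the HH:mm strings and anchor both to the patching date
$startTime = [datetime]::ParseExact($WindowStart, "HH:mm", $null)
$endTime   = [datetime]::ParseExact($WindowEnd,   "HH:mm", $null)

$windowStartDate = $PatchingDate.Date.Add($startTime.TimeOfDay)
$windowEndDate   = $PatchingDate.Date.Add($endTime.TimeOfDay)

if ($windowEndDate -lt $windowStartDate) {
    # End time is earlier than start time, so the window ends the next morning
    $windowEndDate = $windowEndDate.AddDays(1)
}
# $windowEndDate is now 2014-11-21 04:30
```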

I have this feeling I might want to use actual AD schema attributes, but I’m not sure that’s as maintainable as just telling someone to “edit the Description”. It also means the Description becomes a dependency – someone modifying it without knowing it is used for this might break things. To guard against that, one might run this script nightly and export the object to a CSV, so if someone ever DID mess up the Descriptions, you could very easily refer back to what they were at the time. There are many other ways you could deal with that, though…
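The nightly CSV snapshot idea might look like this – the filename pattern is an example:

```powershell
# Nightly snapshot of the schedule object, so the Description fields can be
# recovered if someone edits them by accident. The script emits
# $objPatchScheduleList on the pipeline, so it can be exported directly.
.\Get-PatchingScheduleInfo.ps1 |
    Export-Csv -NoTypeInformation ("PatchSchedule-{0:yyyy-MM-dd}.csv" -f (Get-Date))
```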

Yesterday, I posted some details about MS14-068 and MS14-066 (https://vnetwise.wordpress.com/2014/11/19/cve-2014-6324-ms14-068-and-you/), and of course today I have had to do some investigating into a few sites that have a variety of patching systems. Some are using SCCM, some WSUS; some have policies and procedures, some don’t. But I noticed a potential ‘perfect storm’ of situations that could cause some of them grief – and at more than just one site.

Let me draw you a picture of what is a pretty common environment:

WSUS exists for updates, because that’s “the responsible thing to do”

WSUS was likely configured some time ago, and no one likes it because it’s not sexy or fancy, so it doesn’t get any love. Thus, it is probably running on Windows 2008 or 2008 R2.

Someone at some point *did* ensure that WSUS was upgraded or installed with WSUS 3.0 SP2

This all sounds pretty good, on the face of it. Now let’s introduce some real world into this environment….

Procedures state that you will install updates that are a month old or older – so you’re staying 30 days out, which is reasonable; let someone else go on Day 0.

Those same procedures state that you will look at the list, and select the Critical and Security Updates from the last month, and approve them.

Nothing is stated for what to do about the current month’s patches – they are left as “unapproved” – but also not “declined”

Alright, so still pretty “common” and at face value, not that bad. A year or two goes by, and now you introduce Windows 2012 and Windows 2012 R2 to the mix. This itself is not a problem, but it’s where you start to see the cracks. Without even having to look at the environment, I know already the things I want to be looking for….

Because the current month’s updates are not being “Declined”, they show up in the list as “missing”. If you have 10 updates, and 8 are approved and 2 are not, you will only ever show at best 90% patched – WSUS/WU knows the remaining two are “available” but not installed. You want to decline those so that only 8 updates are counted and you can show 100% success. Otherwise, how do you know at a glance whether a missing update is an approved one that SHOULD be there, or one from this month? Your reporting is bad. See: https://vnetwise.wordpress.com/2014/03/24/howto-tweaking-wsus-so-it-only-reports-on-updates-you-care-about/

Because the process counts on someone approving “last month’s” updates and not “all previous updates”, there is almost certainly going to be some weird gap – a period of a few months that isn’t approved and isn’t installed for some reason. But the “assumption” is that everything is healthy. And because the previous point means no updates are “declined”, the completion reports are untrustworthy – and/or never reviewed anyway.

Next, Windows 2012+ has been introduced. There’s a KB that is required to be installed on the WSUS server, *and* a rebuild of the WSUS client package on each client, to ensure compatibility. See MS KB2734608 (http://support.microsoft.com/kb/2734608). Because this is an “Update” and neither Critical nor Security, it is not applied to either the WSUS server or the clients.

In order for the Windows 2012/2012 R2 WU/WSUS behavior to actually be changed, you need GPOs that Windows 2012/2012 R2 understands. For that to be true, you need 2012+ ADMX files in your GPO environment – preferably in your GPO “Central Store” (again – https://vnetwise.wordpress.com/2014/03/20/howto-dealing-with-windows-2012-and-2012-r2-windows-update-behavior-and-the-3-day-delay/). But because Windows 2012 and 2012 R2 were likely “added to the domain” with no testing, studying, certification, or reading, this wasn’t done. Equally, even if it WAS done, most likely someone is still editing the GPOs on a 2008/2008 R2-based Domain Controller – which wipes out the ADMX-based changes and replaces them with ADM files and the subset of options they understand. You’ll never know this happened, though, and even if you jump up and down and tell people not to do it, they will.

No one is ever doing a WSUS cleanup, so Expired, Superseded, etc. updates are still present – which isn’t helping anyone.
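As a side note, part of that cleanup can be scripted against the WSUS API rather than clicked through the console. A sketch, assuming the WSUS administration assembly is present (it ships with the WSUS console) and that you run it on the WSUS server itself:

```powershell
# Sketch: decline superseded updates that are still active in WSUS.
# Review the list before declining in a real environment.
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration")
$wsus = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer()

$wsus.GetUpdates() |
    Where-Object { $_.IsSuperseded -and -not $_.IsDeclined } |
    ForEach-Object {
        Write-Host "Declining: $($_.Title)"
        $_.Decline()
    }
```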

So to make that detail a little shorter:

Choosing Critical and Security Updates only is causing you to miss out on *required* updates. Stop being “fancy” – just select them all please.

Because you’re choosing “date ranges” of updates, you’re missing some from time to time. Stop being “fancy” – select “from TODAY-## to END”

If you introduce a new OS to your environment, you need to ensure your AD and GPO’s support them.

On top of the Updates and Update Rollups above that cause those issues, let’s take a quick look at some of the other things that are NOT considered Critical or Security Updates:

That’s just ONE Update Rollup. None of those look like ANYTHING I’d want to happen to my servers. </Sarcasm> So why WOULDN’T I want to install them? Yes, there may be features you’re not using. Perhaps you don’t use Deduplication or DFS-R. Won’t it be fun later when you install those Roles/Features, and WSUS scans that server and says “all good, nothing to update”? Tons of fun!

So, long story short – please stop being fancy. You’re introducing complexity and gaps into your environment, and actually making things harder. This means more work for you and your staff and co-workers, who likely don’t have enough time and resources as it is.

Today Microsoft released update MS14-068 to address CVE-2014-6324, a Windows Kerberos implementation elevation of privilege vulnerability that is being exploited in-the-wild in limited, targeted attacks. The goal of this blog post is to provide additional information about the vulnerability, update priority, and detection guidance for defenders. Microsoft recommends customers apply this update to their domain controllers as quickly as possible.

And:

The exploit found in-the-wild targeted a vulnerable code path in domain controllers running on Windows Server 2008R2 and below. Microsoft has determined that domain controllers running 2012 and above are vulnerable to a related attack, but it would be significantly more difficult to exploit. Non-domain controllers running all versions of Windows are receiving a “defense in depth” update but are not vulnerable to this issue.

Now, don’t take that to mean my stance is “Meh, don’t patch!”. Quite the opposite. As per the article:

Update Priority

1) Domain controllers running Windows Server 2008 R2 and below

2) Domain controllers running Windows Server 2012 and higher

3) All other systems running any version of Windows

So get those DC’s patched _now_, and calmly plan to update the remaining servers.

But I’ve heard from a number of colleagues/twitter/posts today that this introduces chaos, makes a busy week worse, etc. Certainly it is critical and important, but I’m not getting the frustration:

It immediately applies only to 2008 R2 DCs and lower. Most small-to-mid-size enterprises I know don’t have more than a couple dozen at best, and often far fewer. So patch them.

You likely don’t have 2012 R2 DCs – for many reasons: too many legacy systems that don’t like 2012/2012 R2 DCs, you haven’t had time to get around to it, you haven’t tested, you’re afraid of them, whatever.

They’re DC’s, they’re redundant. Just patch the bloody things.

But I think it’s that last part that makes people lose their minds. Folks, if you can’t reboot a DC in your environment, you’ve built a very poor system (or “have” one – maybe you inherited it – it’s still your job to make it better!). Yes, you should minimize the downtime, so do it in a period of lower activity if you can, but if you have to wait for… 2:00AM on a Sunday, there’s a problem with what you’ve built. I can probably even guess what these problems are:

Even though you likely have Windows Server Datacenter and virtualization (Hyper-V or VMware) for unlimited VMs, someone is probably all freaked out about “server sprawl” – so you have fewer servers than you could have.

Failover/maintenance has never been tested. So you have “redundant systems” and maybe tested failover in a CONTROLLED fashion – but never the equivalent of a “power cord yank”.

Stop doing this.

It doesn’t require a $5000 1U server to run a role any more. Stop building like it’s 2003. Server sprawl is only a problem if you have lousy automation and processes for consistency. Managing 53 or 153 servers shouldn’t be significantly different. You SHOULD be able to reboot servers and services at any point in time without concern. If you cannot, then even if you have multiples, you DO realize you have identified a failure point, right?

If your answer is something along the lines of “But we don’t know the impact it will have…” – seriously? Why not? You tested, right? Your monitoring software will alert you of services or functions that fail when a dependent service fails? You might have even built in rules to self-heal or scripts to try “the obvious fix”?

Probably not though. Everyone’s too busy paying 28% “Technical Debt” on the big, fancy, expensive toys and software they bought – which they didn’t get enough people to install completely, or which got button-mashed until it “kinda worked” before the next fire stole the bodies away. You know that “Cloud” thing everyone’s talking about, and how all the CEOs/CIOs/Directors/Management “want it” but “don’t know what it does”? It’s about automation, scale, and self-healing, with elasticity to grow and shrink. Instead of “wanting it”, it’s time to “build it”.

Or, we can just keep doing like we’ve always done – chasing the next hot thing, and killing symptoms instead of root causes. That’s probably what will happen…

Update 18-11-2014: V2 of the bulletin was released. Details from the update:

Reason for Revision: V2.0 (November 18, 2014): Bulletin revised to announce the reoffering of the 2992611 update to systems running Windows Server 2008 R2 and Windows Server 2012. The reoffering addresses known issues that a small number of customers experienced with the new TLS cipher suites that were included in the original release. Customers running Windows Server 2008 R2 or Windows Server 2012 who installed the 2992611 update prior to the November 18 reoffering should reapply the update. See Microsoft Knowledge Base Article 2992611 for more information

So if you’ve already patched, you’ll need to re-patch.

I wonder if this can be taken to be true:

As of writing, the MSRC and other security assets do not report that there are attacks in the wild since the issue was responsibly disclosed to Microsoft. However, it is only a matter of time….

Given the issues, and how this is introducing interoperability issues, it may be advisable to give some thought to how fast this update gets rushed into production.

Hope the above information helps, and sorry for my little detour into rant-ville. I feel better now though, if it matters.

In order to set up an isolated lab network, we need a way to handle the “isolation” part. By doing so, we can allow the VMs to still have internet access and/or access to the company LAN, but have no direct inbound access to them other than the vSphere console, and we ensure that the internal LANs for the labs can be used without conflict with existing LANs. For example, DHCP and PXE booting would then be safe to use. To do so, we’ll use a M0n0wall appliance, as this works well on VMware Workstation, vSphere, etc. This example will cover building this for a VMware vSphere environment rather than VMware Workstation – but the concepts carry across.

Information you will require to complete this task:

· User the lab is for – e.g. David Lock – we need this to determine the initials to use

Recently we came across an issue with our Exchange 2010 environment related to ActiveSync and Apple iOS devices prior to firmware v6.1.2. As such we needed a way to not only get a report of users with device relationships by version/device, but also a means to setup a block for those devices if needed. It turns out that Exchange has a built in process for this by way of the ActiveSync Policies and their state can be either “Granted”, “Denied” or “Quarantined”. In the case of a Quarantine, the user will get a message on their phone and will no longer be able to access the system. However, upon remedying their issue, they will automatically be “Granted” by nature of the new OS/firmware now no longer matching the Quarantine policy search. This works exceptionally well for us, and I will document the steps I’ve used over the last few days to make this all work.

This should be relatively self-explanatory. We’re getting ActiveSyncDevices where the DeviceOS column/field is anything containing *iOS*, and then outputting only the UserDisplayName,DeviceType,DeviceOS,WhenChanged fields, and then exporting it to a CSV file. This CSV file can then be sorted and filtered as desired.
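The command itself isn’t shown above, but based on that description it would be something along these lines – the output filename is an example:

```powershell
# Report iOS device partnerships as described in step 1 (Exchange 2010
# Management Shell); export to CSV for sorting and filtering.
Get-ActiveSyncDevice |
    Where-Object { $_.DeviceOS -like "*iOS*" } |
    Select-Object UserDisplayName, DeviceType, DeviceOS, WhenChanged |
    Export-Csv -NoTypeInformation iOSDevices.csv
```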

2) As we only had iOS v6.x devices, we needed to put Quarantine policies in place. We could not, however, simply match “*iOS 6*” or “iOS 6.1*”, as this would also match the approved v6.1.2 version. Also, while it MAY be possible to Quarantine “*iOS*” and then Grant “*iOS 6.1.2*”, this would make v6.1.2 the ONLY approved version, and when v6.1.3, v6.2, or v7.0 comes out, new policies would need to be put in place. By creating only policies that Quarantine the existing v6.0, v6.1.0, and v6.1.1 devices, we avoid that issue:
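A hedged sketch of those quarantine rules – the QueryString values must match the exact DeviceOS strings from your own report, and the build numbers below are placeholders, not verified values:

```powershell
# Quarantine device access rules for known-bad iOS versions (Exchange 2010
# Management Shell). QueryString values here are illustrative placeholders.
New-ActiveSyncDeviceAccessRule -QueryString "iOS 6.0 10A403" `
    -Characteristic DeviceOS -AccessLevel Quarantine
New-ActiveSyncDeviceAccessRule -QueryString "iOS 6.1 10B141" `
    -Characteristic DeviceOS -AccessLevel Quarantine
```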

This will show the UserDisplayName, their DeviceUserAgent (useful for determining the type of device), and what DeviceOS they were running. It is worth noting that after a user updates and is removed from Quarantine, a re-run of the above command will not show the user as removed – they simply are no longer Quarantined and do not show up in the list. I confirmed this with my own device, as I upgraded from iOS 6.0.2 to iOS 6.1.2.
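The listing command itself isn’t reproduced in this post; a sketch that should produce the same columns is:

```powershell
# List currently quarantined device partnerships with the columns
# described above (Exchange 2010 Management Shell).
Get-ActiveSyncDevice |
    Where-Object { $_.DeviceAccessState -eq "Quarantined" } |
    Select-Object UserDisplayName, DeviceUserAgent, DeviceOS
```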

4) There is also the ability to set the ActiveSyncOrganizationSettings to allow for “administrator e-mail” account(s). This lets us put in e-mail address(es) that get an instant notification when a device is quarantined or blocked. This way, we know as soon as the user knows. While it is unlikely we would do so, we could even proactively contact the user after seeing the alert to ask if they need assistance.
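That setting is a one-liner in the Exchange Management Shell – the address below is an example:

```powershell
# Notify these administrators whenever a device is quarantined or blocked
Set-ActiveSyncOrganizationSettings -AdminMailRecipients "easadmin@example.com"
```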

5) Finally, in the report from Step 1, it should be noted that users/mailboxes/devices that have not been properly/fully removed will still show up. For example, even if Bob Smith’s account is disabled, that mailbox and its devices will show up. Equally, I noted that my iPhone 4 was still showing, as I never did anything to remove the device. But more confusing is that my iPhone 5 (of which I have only one) showed up twice – once for iOS 6.0.2 and once for iOS 6.1.2.

I did attempt to purge my iOS 6.1.2 device to test what would happen, and upon my phone’s next sync, it emptied my mail folders, then refreshed and re-downloaded all my mail and current calendar appointments. When I checked that my sync folders were still accurate, all of my settings were intact. No interaction on my part was needed to reconnect – I was not prompted for credentials or settings. As such, it seems that any device that is considered old, out of date, or suspect is fair game to delete; if it is in fact still active, it will simply recreate the relationship.
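The purge itself can be done by piping the device into Remove-ActiveSyncDevice – the mailbox name and OS filter here are examples:

```powershell
# Remove a stale device partnership for a given mailbox. If the device is
# still active, it will simply re-create the partnership on its next sync.
Get-ActiveSyncDevice -Mailbox "bsmith" |
    Where-Object { $_.DeviceOS -like "*iOS 6.0*" } |
    Remove-ActiveSyncDevice
```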

The last largely outstanding task is to find a way to *customize* the Quarantine message. Each policy/filter should be able to have its own and, according to documentation, this should be reachable via the ECP (e.g. https://mail.<domain.name>/ECP), but I was having no luck getting it to do more than show “loading”. Another day, perhaps…….