Archive for the ‘Group Policy’ Category

I’m sometimes asked what the best practice is surrounding the Default Domain Policy and Default Domain Controllers Policy. Microsoft has some good guidance on this topic, but it’s not always clearly and consistently stated. Here’s a quick Q&A that might help.

Q. Is it ok to make changes to the DDP and DDCP GPOs, or should I leave them alone and create new policies?

A. The best practice recommendation from Microsoft is as follows:

To accommodate APIs from previous versions of the operating system that make changes directly to default GPOs, changes to the following security policy settings must be made directly in the Default Domain Policy GPO or in the Default Domain Controllers Policy GPO: password policy, account lockout policy and Kerberos policy settings (Default Domain Policy), and user rights assignment and audit policy settings (Default Domain Controllers Policy).

So, that’s it! If you want to apply other settings at the domain root level or to the Domain Controllers OU then you should create new GPOs and link them to the appropriate scope of management. The ordering of the GPOs shouldn’t really matter as you should have no overlapping settings. As a general rule of thumb, however, I would recommend assigning any new GPOs a higher precedence in case someone starts using the default GPOs for settings that are not on the “approved” list above. That way the new GPOs will win in any conflict.
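As an illustration of giving a new GPO higher precedence, the Group Policy module’s New-GPLink cmdlet takes a link order (the GPO name and domain DN below are placeholders):

```powershell
Import-Module GroupPolicy

# Link an existing GPO at the domain root with the highest precedence
# ('Domain Security Settings' and contoso.com are hypothetical)
New-GPLink -Name 'Domain Security Settings' -Target 'DC=contoso,DC=com' -Order 1
```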

Another reason to limit the settings in the default GPOs is to allow them to be re-created with minimal re-work in scenarios where they have gone missing or are corrupt and you don’t have a good backup. The method by which you can re-create the GPOs is using a tool called DCGPOFIX.EXE (https://technet.microsoft.com/en-us/library/hh875588.aspx). Bear in mind that this tool is a last resort following a major issue or disaster and you should really ensure you have good GPO backups, as per this article:
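For reference, the tool is run on a domain controller and targets one or both of the default GPOs. A sketch of its use (last resort only, as stressed above):

```shell
REM Recreate both default GPOs
dcgpofix /target:Both

REM Or recreate only one of them
dcgpofix /target:Domain
dcgpofix /target:DC
```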

If you are in a disaster recovery scenario and you do not have any backed up versions of the Default Domain Policy or the Default Domain Controller Policy, you may consider using the Dcgpofix tool. If you use the Dcgpofix tool, Microsoft recommends that as soon as you run it, you review the security settings in these GPOs and manually adjust the security settings to suit your requirements. A fix is not scheduled to be released because Microsoft recommends you use GPMC to back up and restore all GPOs in your environment. The Dcgpofix tool is a disaster-recovery tool that will restore your environment to a functional state only. It is best not to use it as a replacement for a backup strategy using GPMC. It is best to use the Dcgpofix tool only when a GPO back up for the Default Domain Policy and Default Domain Controller Policy does not exist.

Source: https://support.microsoft.com/en-us/kb/833783

Q. We have disabled our DDP and DDCP GPOs and replaced them with new GPOs. Is that OK?

A. No, that’s not OK. The GPOs have fixed, well-known GUIDs and can be targeted directly via those GUIDs by the “legacy APIs” mentioned above.

One well-known application that directly modifies the Default Domain Controllers Policy is Microsoft Exchange. The installer adds the Exchange Servers group to the “Manage Auditing and Security Log” User Right (also referred to as the SACL right). So, if you disable or unlink the GPO, this right (and potentially others like it) will go missing and will cause problems for Exchange.

Q. Is it OK to rename the DDP and DDCP GPOs?

A. If you feel you must do this I don’t believe it will have any impact, other than it might confuse people when they look for them. I’ve seen some customers rename the GPOs to align them with their in-house naming convention. As mentioned above, these GPOs are targeted using their well-known GUIDs, which is why the rename shouldn’t cause an issue.

You can find the renamed GPOs quite easily using the Group Policy cmdlets, e.g.

# Find the Default Domain Policy

Get-GPO -Guid 31b2f340-016d-11d2-945f-00c04fb984f9

# Find the Default Domain Controllers Policy

Get-GPO -Guid 6ac1786c-016f-11d2-945f-00c04fb984f9

Conclusion

Use the default GPOs for the approved specific purposes only. If you have other settings you need for the same scope of management, create new GPOs and link them with higher precedence than the default GPOs. Under no circumstances should you disable or unlink the GPOs. If you rename the default GPOs there should be no impact, but your mileage may vary.

Despite Active Directory having been around for more than 10 years, I still find new implementations proceeding without directory service access auditing enabled. For me, auditing of who does what, where and when in your directory is crucial information. I can’t fully fathom why Microsoft doesn’t have it enabled with some sensible defaults out of the box, but maybe that’s just me. If you download and install Microsoft’s Security Compliance Manager 3.0, the Domain Controller baseline contains good auditing defaults. Anyway, here is a quick “how to” guide for enabling basic directory service access auditing in Windows Server 2012 R2 Active Directory.

Getting auditing going is done in two key steps.

Configuring Group Policy with the appropriate auditing settings

Configuring the System Access Control List (SACL) at the appropriate level(s) in the directory

Ok, let’s get started with the first item (Group Policy). For this, you will need a GPO linked to the Domain Controllers OU. You can either use the Default Domain Controllers Policy or create a new one specifically for auditing. My preference is to leave the DDCP at its defaults and create a new GPO with the settings.
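Sticking with that preference, a new auditing GPO can be created and linked in one step (the GPO name and domain DN below are placeholders):

```powershell
Import-Module GroupPolicy

# Create a dedicated auditing GPO and link it to the Domain Controllers OU
New-GPO -Name 'DC Auditing Policy' |
    New-GPLink -Target 'OU=Domain Controllers,DC=contoso,DC=com'
```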

Before the days of granular auditing settings you would configure your directory service access auditing under:

Computer Configuration->Policies->Windows Settings->Security Settings->Local Policies->Audit Policy->Audit directory service access

These settings belong to the dark days of pre-Windows Server 2008, when only the nine top-level auditing categories were available. You don’t want to use this setting. Instead, you want to use the more granular auditing sub-categories first introduced in Windows Server 2008.

The first thing you need to do is enforce the use of the sub-categories in case someone unwittingly turns on the legacy auditing mentioned above. To do this, enable the following setting:

Computer Configuration->Policies->Windows Settings->Security Settings->Local Policies->Security Options->Audit: Force audit policy subcategory settings (Windows Vista or later) to override audit policy category settings

Now we need to enable auditing for the specific sub-categories we are interested in. To do this, go to:

Computer Configuration->Policies->Windows Settings->Security Settings->Advanced Audit Policy Configuration->Audit Policies->DS Access

… and enable Audit Directory Service Access. I would also enable Audit Directory Service Changes, as that can provide you with complementary audit events, including before and after values for changed attributes (really very useful). Success and Failure? Perhaps, but unless you have firm requirements to capture failure events, choosing Success only will reduce the number of events generated.

Right, remember to link your new GPO to the Domain Controllers OU and that’s it for the GPO side of things. You might be thinking that’s all you need to do, but unless you’ve done part 2 you won’t see the required events generated.
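To confirm the settings have taken effect on a DC, auditpol can show the effective audit policy (the subcategory names below assume an English-language OS):

```shell
auditpol /get /subcategory:"Directory Service Access"
auditpol /get /subcategory:"Directory Service Changes"
```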

Now you might be interested in just certain parts of your directory, but if you don’t have an obvious place to begin, consider turning on auditing from the top of your domain partition. To do this, open the Active Directory Administrative Centre (dsac.exe). Yes, yes, you could use dsa.msc or adsiedit.msc too.

Right-click the top of the domain tree (“contoso” in my case) and bring up the properties. Under Extensions you will see the Security tab. From there select Advanced and then choose the Auditing tab. If you want to be comprehensive, I would select the Everyone security principal, set Type to Success and Applies to: This object and all descendant objects. For the permissions, again if you want to be comprehensive, set the following:

Write all properties

Delete

Delete subtree

Modify permissions

Modify owner

All validated writes

All extended rights

Create all child objects

Delete all child objects

Once you have applied the changes you’re pretty much ready to go. You should now start to see comprehensive directory access events as well as the before and after values for changed objects.
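As a quick sanity check (a sketch, run in an elevated PowerShell session on a DC), the relevant events can be pulled from the Security log – 4662 for directory service access and 5136 for directory service changes:

```powershell
# Retrieve recent directory service auditing events from the Security log
Get-WinEvent -FilterHashtable @{ LogName = 'Security'; Id = 4662, 5136 } -MaxEvents 50 |
    Select-Object TimeCreated, Id, Message |
    Format-List
```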

One thing you might not be aware of is that there was a bug in the RTM version of Windows Server 2012 R2 which meant that the directory service change information events did not get captured. Of course, by now you will have your R2 servers fully patched, but just in case you are still running RTM be aware that you will need the download from KB2887595 for the change information to be available.

Bear in mind that, if you have a lot of auditing going on in your security logs on your DCs, the events are going to be overwritten pretty regularly even if you beef up the size of your security event log. As part of defining your audit strategy you should work out your requirements for storing key events. There are a number of ways to do that, including leveraging some kind of centralised audit collection system (or SIEM), but that’s beyond the scope of this article.

I fell foul of this one the other day and it took a while to figure out what to do about it. Here’s the scenario:

You deploy a new Group Policy Preferences (GPP) setting to create a folder on a workstation and specify the “Apply once and do not reapply” option. Unsurprisingly, this implies the GPP item will apply only once and will not run again!

But what if you have a problem on a specific target computer where the computer thinks it has already applied the GPP setting once? You won’t get the setting to reapply using the traditional gpupdate /force option. Instead you must look in the registry of the affected computer for the RunOnce entries. For computer-side preference items these are located under HKLM\SOFTWARE\Microsoft\Group Policy\Client\RunOnce (user-side items use the same path under HKCU).

Ok, so armed with this information, how do you go about identifying which GUID-style name corresponds to your GPP setting? Don’t be led down the path of thinking the GUID-style name will match the GUID of your GPO – it doesn’t. That kind of makes sense, as a single GPO can have multiple GPP settings. The answer lies buried within the XML of the GPP setting. To get to the XML, right-click the GPP item within the GPO Editor and select All Tasks -> Display XML. Once you can see the XML, look for the line containing “FilterRunOnce”. The value shown next to its “id” attribute is the one that corresponds to the registry entry.

To get the GPP setting to re-apply to the problem workstation, simply delete the relevant string value from the registry and force the policy to re-apply (gpupdate /force).
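Put together, the clean-up might look like this (a sketch: the key path is for computer-side items, and the GUID below is a placeholder for the FilterRunOnce id you found in the XML):

```powershell
# Computer-side GPP run-once flags are recorded as values under this key
# (user-side items use the same path under HKCU:)
$key = 'HKLM:\SOFTWARE\Microsoft\Group Policy\Client\RunOnce'

# List the recorded run-once IDs
(Get-Item $key).Property

# Remove the entry whose name matches the FilterRunOnce id from the
# GPP XML ({00000000-0000-0000-0000-000000000000} is a placeholder)
Remove-ItemProperty -Path $key -Name '{00000000-0000-0000-0000-000000000000}'

# Force the policy to re-apply
gpupdate /force
```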

I’ve recently been looking at Microsoft’s Security Compliance Manager 3.0. SCM provides a rich set of server-role-based security baselines for deployment using either GPO or SCCM. This latest version includes baselines for Windows Server 2012.

After deploying the “WS2012 Domain Controller Security Compliance 1.0” baseline settings via GPO into my lab environment I found RDP sessions to my Windows Server 2012 DCs to be horrendously slow – almost to the point of not being able to do anything.

My on-line searches for the cause revealed nothing official from Microsoft, but I did find some references to one specific setting being the probable cause. The setting is “Use FIPS compliant algorithms for encryption, hashing, and signing” set to Enabled.

You probably know this, but for some reason I only found out about it when someone showed it to me the other day. Anyway, in the interests of sharing….

A really quick way to find the domain password and account lockout policy is to run the following from a CMD prompt:

net accounts

The output looks like this:

One thing you should bear in mind is that the output doesn’t take into account any Fine Grained Password Policies that may apply to your account. In other words, it is simply the output of the password and account lockout policy set at the domain level (usually in the Default Domain Policy) and not the resultant set of policies.
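If the Active Directory module is available, the same information (plus the resultant policy) can be queried in PowerShell; the user name below is a placeholder:

```powershell
Import-Module ActiveDirectory

# Domain-level password and account lockout policy (what 'net accounts' reports)
Get-ADDefaultDomainPasswordPolicy

# Resultant policy for a specific user, taking any Fine Grained Password
# Policies into account (returns nothing when no FGPP applies)
Get-ADUserResultantPasswordPolicy -Identity jbloggs
```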

I was recently involved in a task to consolidate an OU structure. Part of this involved moving user objects from one OU to another and re-linking GPOs that were linked to the old OU to the new OU. There were a large number of links and I didn’t fancy adding them manually, so I spent a little time writing a PoSH script to do it. Enjoy! As always, please post a comment if know of a better/different way to do the same thing.
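The attached script isn’t reproduced here, but the core of such a re-linking script might look like this (the OU distinguished names are placeholders):

```powershell
Import-Module GroupPolicy

# Hypothetical source and destination OUs
$oldOU = 'OU=OldUsers,DC=contoso,DC=com'
$newOU = 'OU=NewUsers,DC=contoso,DC=com'

# Recreate each GPO link from the old OU on the new OU,
# preserving the link-enabled and enforced states
foreach ($link in (Get-GPInheritance -Target $oldOU).GpoLinks) {
    New-GPLink -Guid $link.GpoId -Target $newOU `
        -LinkEnabled $(if ($link.Enabled)  { 'Yes' } else { 'No' }) `
        -Enforced   $(if ($link.Enforced) { 'Yes' } else { 'No' })
}
```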

A little while back I posted a Powershell 1.0 script to backup all the GPOs in a domain. Now that Powershell 2.0 is available together with the Group Policy module it is much easier to script Group Policy tasks. The attached script is basically a re-write of my previous script, but now using the Powershell 2.0 cmdlets.

The script is intended for use with the Windows Task Scheduler. For example, by backing up the GPOs to disk on a daily basis you have a simple method for restoring accidentally deleted (or badly modified) GPOs. In my customers’ environments I combine this task with a scheduled full volume snapshot to disk, so that a number of days worth of backups are available.

# Import the required modules (RSAT / AD PowerShell must be installed)
Import-Module GroupPolicy
Import-Module ActiveDirectory

# Folder to hold the backups – adjust to suit your environment
$BackupPath = "D:\GPOBackups"

# Remove any previous backups from the folder
## Note: You will need to move the backups off to tape/disk
## archive daily if you need access to older GPO versions
Remove-Item $BackupPath\* -Recurse -Force

# Find out what domain this computer is in
$mydomain = Get-ADDomain -Current LocalComputer

# Get all the GPOs in the specified domain
$AllDomGPOs = Get-GPO -Domain $mydomain.DNSRoot -All

# Back up each GPO to the folder
foreach ($GPO in $AllDomGPOs) {
    Backup-GPO -Guid $GPO.Id -Path $BackupPath
}

I’m finding there is a huge gulf between playing with Windows Server 2008 in a lab and working with it in a production environment. The biggest difference for me is that I typically use a built-in Administrator account in the lab environment, but work with an account with delegated permissions in production. This means I encounter…er…challenges with User Account Control (UAC) on a fairly regular basis. I have already blogged about some scenarios in which UAC doesn’t error or fail gracefully here, here and here.

Today’s blog entry is all about the following UAC-related Group Policy setting:

Enabled by default, this setting basically forces all users, including Administrators to run as standard users. Any tasks that need to be run as Administrator have to be launched with elevated privilege. It is a setting that is entirely sensible from a security perspective, but can cause frustration and confusion in certain situations. Here’s an example scenario.

Let’s say you are logged into a Windows Server 2008 (or Vista) machine with an account that is a member of the local Administrators group. By default the Administrators group has Full Control permissions over files and folders on the machine. With the above-mentioned Group Policy enabled, however, you may not be able to, for example, create new text files by right-clicking within Windows Explorer (unless you have rights to do so through either explicit permissions or through membership of other groups). For example, when right-clicking in the root of C:\ you are only likely to have the ability to create a new folder by default, as shown below.

No problem, you might think, my account is a member of the local Administrators group so I’ll just fire up Windows Explorer in elevated mode by right-clicking the icon and choosing “Run as Administrator”. Doing this gives all the appearance of running in elevated mode, but in reality does nothing.

So how the heck do you create new text files? Or, for that matter, how do you do all those other things that require elevated privileges that you typically would do from within Windows Explorer in earlier versions of the OS? Well, there may be other methods, but the workaround I found was to open Notepad in elevated mode. Then from within Notepad select File -> Open and this gives you, effectively, an elevated Windows Explorer to work with, as shown below.

Another option would be to open a command window using “Run as Administrator” and create the text file from there. You could then edit and save it using an elevated Notepad session. Again, a rather clumsy workaround for something that you did without thinking in previous versions of the OS.

If nothing else, UAC in Windows Server 2008 and Vista forces you to think outside the box. The old ways in which you used to work with the user interface in earlier versions of the OS may no longer apply. It can be deeply frustrating, but I suspect UAC is here to stay because of the security benefits it delivers. We may as well get used to it.

The other day I had a need to configure scheduled backups of GPOs to file on a Windows Server 2008 Domain Controller. Aha (I thought), I’ve done this before using the BackupAllGPOs.wsf script that is included along with a whole bunch of other handy scripts when you install the Group Policy Management Console (GPMC). After a few minutes of fruitless searching on my Windows Server 2008 DC I realised that although the GPMC was installed (as a feature) the scripts were nowhere to be found. After some Googling I found out that I hadn’t been singled out for victimisation – unlike Windows Server 2003, the scripts just aren’t installed by default in Windows Server 2008 when you enable the GPMC feature. I discovered that you could download the Vista and Windows Server 2008 versions of the scripts here:

It puzzled me that the scripts weren’t included by default. I suspect the Vista and WS2008 versions of the scripts were developed after the products had shipped. Anyway, it made me think that Microsoft maybe wanted me to work with PowerShell and not VBScript. Aha (I thought again), I’ll see if I can find the PowerShell equivalent of the GPMC scripts. After a fair bit of searching I found two options.

Note that I’ve set the $domainName variable to match the domain of the computer from which the script is run. To set the variable to match the domain of the user account under which the script runs change it to (may wrap):
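The original script is attached rather than shown inline, but the distinction can be expressed with the .NET directory classes (no extra modules required):

```powershell
# Domain of the computer the script runs on
$domainName = [System.DirectoryServices.ActiveDirectory.Domain]::GetComputerDomain().Name

# Domain of the user account the script runs under
$domainName = [System.DirectoryServices.ActiveDirectory.Domain]::GetCurrentDomain().Name
```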

I encounter a fair number of AD implementations as part of my work. Some are good, some bad and some just plain ugly. Here’s a more or less random collection of bad habits that I see quite regularly and some tips on how to avoid and/or kick them.

1. Poor or missing Active Directory monitoring

A number of organisations rely on monitoring Domain Controllers simply as servers. They will monitor things such as CPU, memory, disk utilisation, disk space, etc., but not AD as a service. If something goes bad within AD it might not be picked up by standard server monitoring and alerting. You need to ensure that all AD services are available and healthy. This involves monitoring items such as LDAP and GC port availability and response times, forest synchronisation with an authoritative time source, correctly published DNS SRV records, replication working, SYSVOL healthy, etc.

Implementing a monitoring and alerting solution for your AD service will allow problems to be detected and resolved early, rather than firefighting after the event has happened.

In addition to Microsoft’s Operations Manager Management Pack for AD, there are a number of 3rd party AD monitoring solutions. NetPro’s DirectoryAnalyzer is one of the more comprehensive.

2. Bad delegation

AD offers the ability to implement a granular delegation to suit environments of all sizes. Why is it then that so many organisations end up with little or no delegation and security model? For example, I regularly see environments that have 20 or more accounts in the Domain Admins group. This appears to be because it is seen as too difficult and/or time consuming to configure the appropriate delegation. Once an account is put into a privileged group there appears to be reluctance to remove it “in case it breaks something”. Here are some general tips around delegation.

Separate standard user accounts from administrative accounts. Only allow administrative accounts to be members of privileged groups.

Don’t allow service accounts to be members of the highly privileged groups (e.g. Domain Admins, Schema Admins, Enterprise Admins and built-in Administrators). If the documentation from a vendor says that this membership is required the information is probably wrong. 99% of the time there is a way to delegate without making the account a member of a privileged group.

Apply the principle of least privilege. Give accounts the permissions they need to perform their tasks and no more.

Keep the Schema Admins and Enterprise Admins groups empty. Only populate these groups temporarily when required for a specific task.

Don’t mess with the built-in Administrators group. Leave it alone.

Keep the membership of Domain Admins to a low number (should be no more than 5 trusted individuals, even in large environments).

3. Abuse of the Default Domain Policy

I have seen a number of environments in which the Default Domain Policy and the Default Domain Controllers Policy are heavily used. It is considered a best practice to leave the Default Domain Policy and the Default Domain Controllers Policy untouched and to create new GPOs linked at the Domain and Domain Controllers OU to hold your required settings. The reason for this is that if the Default policies become corrupt and you have no good backups you at least have the option of restoring the defaults using DCGPOFIX.

4. No formal object lifecycle management

I often encounter environments that have little or no formal process for AD object provisioning, re-provisioning and deprovisioning. Amongst other issues, this can lead to a large number of inactive/unused accounts and other objects in the directory. Often the problem is only addressed during a migration or upgrade. The clean-up can be time-consuming, difficult and expensive. Try to associate each newly provisioned object with a human owner (guardian). This will help when making changes in your environment and when you need to remove inactive or unused objects from your directory.

5. No representative staging environment

When making changes to your AD environment, especially schema changes, it is important to have a representative staging environment. This will reduce the overall risk when making the change in your production environment. To make the environment representative, try to make sure at least the following items are the same in both environments:

Schema extensions

Domain Controller service pack and patch levels

Domain and forest functional levels

Number of domains

GC availability

FSMO role distribution

6. No tracking of schema changes

There is nothing built in to AD that will keep track of what changes have been made to the default schema. Quite often I see environments in which the administrators have no idea what changes have been made to the schema. This can lead to risk and uncertainty when making future changes. If you have a formal change management system in place in your organisation, ensure that schema changes are included and fully documented. Try to maintain copies of the LDIF files that are used for the schema extensions. These are useful for preparing test environments as well as being self-documenting.

Even if you do have a formal change management system in place, consider keeping a separate change log somewhere inside your AD environment (e.g. in SYSVOL). Change management systems may come and go, but your AD infrastructure could be in place for 20 years or more.

7. Missing forest recovery plan

Given the importance of AD to most organisations, I am constantly amazed at how many have no forest recovery plan. When challenged on this, most just point to off-site DCs as an indication of the redundancy they have. But what if you lose forest-wide functionality? Microsoft’s excellent whitepaper on forest recovery lists the following failure/horror scenarios that might require a forest recovery:

None of the domain controllers can replicate with its replication partner.

Changes cannot be made to Active Directory at any domain controller.

New domain controllers cannot be installed in any domain.

All domain controllers have been logically corrupted or physically damaged to a point that business continuity is impossible (for instance, all business applications that depend on Active Directory are non-functional).

A rogue administrator has compromised the Active Directory environment.

An adversary intentionally or an administrator accidentally runs a script that spreads data corruption across the Active Directory forest.

An adversary intentionally or an administrator accidentally extends the Active Directory schema with malicious or conflicting changes.

The whitepaper offers guidelines for building your own forest recovery plan and provides a sample roadmap for the recovery steps involved. Microsoft also recommends that you test your forest recovery at least once per year.

8. Missing subnet registrations

In a number of environments I have seen, AD subnets are registered and associated with their corresponding AD site when the infrastructure is first put in place. Subnets introduced afterwards are not always registered. When subnets are not registered, clients on those subnets will not find an in-site DC and/or GC to use, which can lead to slow responses and unnecessary bandwidth utilisation.

DCs detect connections from clients on unregistered subnets and log the information in the Directory Service event log (Event 5807). The DC also writes the details to %windir%\debug\netlogon.log. You should regularly monitor your DCs for missing subnets and register them as required.
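A quick way to summarise candidate missing subnets is to parse the NO_CLIENT_SITE entries in that log (a sketch; each such entry ends with the client IP address):

```powershell
# Count NO_CLIENT_SITE entries in netlogon.log by client IP address
Select-String -Path "$env:windir\debug\netlogon.log" -Pattern 'NO_CLIENT_SITE' |
    ForEach-Object { ($_.Line -split '\s+')[-1] } |
    Group-Object |
    Sort-Object Count -Descending |
    Select-Object Count, Name
```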

9. No auditing of Directory Service Access events

If someone deletes an entire OU tree in your domain, you are very likely going to want to know who (or at least which account) was used to perform the deletion. That information will be captured in the security event log of the DC where the change was made, as long as auditing is enabled for the DCs via Group Policy and turned on in the appropriate SACLs of the objects within the directory. Quite often I see that either one or both of these two steps are missing.

I recommend defining and documenting an audit policy for your AD environment and then implementing the policy. Each environment will have different auditing requirements based on the type of organisation that it is, so it is important not to simply accept the default configuration.

10. No event log consolidation

This is linked to the previous entry. There is no point implementing an audit policy if you then subsequently lose the information you need simply because the events have been overwritten in the security event log. Microsoft doesn’t provide a built-in mechanism for consolidation of audit and other event log information. They do however include an Audit Collection System as part of Operations Manager. A number of 3rd parties offer similar solutions that provide a centralised, consolidated view of event information. These systems have the advantage of storing the events more efficiently for much longer periods of time and allowing faster event searches. If the information is important to you (as it is for most organisations) then consider putting the money and resources aside to implement such a system.