There were some internal issues that prevented me from getting this out for a while, but I’m hopeful to post these weekly again now. Note that much of the content that previously would have become a KB article is now ending up on TechNet, so it is possible there will be fewer KB articles being published going forward.

Hi, Ned here again. Today I’m going to talk about handling compressed files in Windows Server 2008 running as a Core server. You know, that thing with no pretty graphical interface and just a big command prompt that stares at you, daring you to try some successful administration.

Core mode is pretty slick – super-small footprint on the disk, great memory utilization, and plenty of useful roles that can be installed. But sheesh – it can be a real pain in the tail if your command-line skills are rusty or if you just haven’t ever had to deal with certain operations sans GUI.

So, quick question: have you tried to zip a file on a Core server yet? It’s sorta hard… because there is no zip.

With Core, there is no Explorer, so there is no file compression functionality. What if you need to compress a file on your Core server in order to copy it over some awfully slow connection and you want to do it all locally? You have two options:

MAKECAB is a tool present in all later SKUs of Windows. While it cannot create ZIP files, it can create cabinet (CAB) files. It’s an odd little utility, as it was designed for developers to package files for installs, but it works well and gets decent compression.

So let’s take the scenario where you are remoted into your Core server through Terminal Services, WINRM, or (yuck) Telnet. You have a single file you need to compress and copy elsewhere. In your CMD prompt session, type:

MAKECAB <source> <destination> /l <some folder path>

So for example:

MAKECAB c:\somefolder\somebig.fil smaller.cab /l c:\

This will cab up the ‘somebig.fil’ into ‘smaller.cab’ and put that file at the root of the C: drive for easier access.

Compressing a bunch of files with MAKECAB

Things get trickier with multiple files. Remember that MAKECAB was designed for developers, so naturally it’s as complex as possible :-). Using MAKECAB for a bunch of files requires that you use a directives file. Let’s say we have a folder full of files, and we want to zip them all up into one cab:

When MAKECAB finishes, you will have a folder called ‘Disk1’ that contains ‘smaller.cab’.
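The multi-file procedure boils down to listing the files in a directives (DDF) file and pointing MAKECAB at it with the /F switch. Here is a minimal sketch, with assumed file and folder names:

```
; files.ddf -- a minimal MAKECAB directives file (names here are examples)
.Set CabinetNameTemplate=smaller.cab
.Set DiskDirectoryTemplate=Disk1      ; output folder
.Set CompressionType=MSZIP
c:\somefolder\somebig.fil
c:\somefolder\another.fil
```

You would then run MAKECAB /F files.ddf and look for the output cab in the ‘Disk1’ folder.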

So a bit tricky, but not too terrible. If you want to get fancy you can add paths in a DDF file, and can even add more directives. If you want a definite cure for insomnia, you can read everything you ever wanted to know about cab files and the Cabinet SDK by going here. This includes further tools like CABARC that add advanced cabinet features. The "maxdisksize" directive tells makecab.exe to write up to 660MB to the cab file. If there is still data to go, it will then create a "Disk2" folder, write to a new cab file there, and so on.

Decompressing a cab file with EXPAND

Naturally, you’re going to want a way to decompress these files. If you had just copied to a Full Windows 2008 computer, Explorer is perfectly ok at reading CAB files. But let’s say you’ve been copying these CAB files between Core servers and now need to expand the data out. Simple:

EXPAND -R <cab file> -F:* <destination folder that must exist>

So using our smaller.cab from above:

EXPAND -R smaller.cab -F:* c:\

And now our file(s) in the CAB file are decompressed and ready to roll. All without any fancy Explorer shell.

Notes

Final Raymond Chen-style preemptive strike: you’re probably firing up the Comment section below to lambast the developers about leaving zip tools out of Win2008 and how this is a conspiracy to… make people not like us? The real reason there is no command-line zip in Core is because there has never been a command-line zip in Windows. IT Pros just haven’t asked for it; they had their favorite zipping tools of choice, and there was always a GUI method for the home users. In fact, there was no MAKECAB in Core mode for most of its development history, because no MS beta customer had asked for it or for zip functionality after a year of testing! Yours truly raised a (polite, professional) stink and it was added in RC1. Don’t say I never did anything for you, folks.

Rob and Mike here. We are often asked why a user does not authenticate against a local domain controller in the same site when logging on across a forest. We've set up the most common scenario to help explain how the domain controller locator works for user logons across a forest.

Scenario

Let's explain the typical scenario in which we see this problem. It starts with two separate Active Directory forests: contoso.com and litware.com. Each forest has a forest (i.e. Kerberos) trust to the other. The contoso.com forest has one Active Directory site, named CHARLOTTE. The litware.com forest contains two sites: REDMOND and CONTOSO. Administrators in the litware.com forest created the CONTOSO site and subnet to support logons for litware.com users from terminal servers in the contoso.com forest.

Figure 1 Forest trust configuration

Problem

Users from litware.com log on to a terminal server in the contoso.com forest. However, users experience a slower-than-usual logon, and the LOGONSERVER environment variable shows the name of a remote domain controller in the REDMOND site rather than the domain controller in the CONTOSO site.

However, administrators from the litware.com domain expect users from their domain (logging on to contoso.com terminal servers) to authenticate using the domain controller located in the CONTOSO site. The CONTOSO site is in the same physical location as the terminal server. Authentication should occur in the local site and should be fast, right?

DC Locator

It is important to know how a Windows computer selects a domain controller. During startup, the computer determines the Active Directory site to which it belongs. Windows accomplishes this by examining the subnet of its current network configuration. Then, the computer queries a domain controller that hosts the computer object (using the Windows API DsGetSiteName). The domain controller answers this query with the name of the site associated with the computer's currently configured subnet. This is all done by the NetLogon service, which runs the DC Locator code at boot and periodically rechecks the computer's site.
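As a rough illustration of that subnet-to-site lookup, here is a toy version in Python. The subnets and site names are invented for this example; on a real domain controller this mapping lives in the subnet objects of the Configuration partition:

```python
import ipaddress

# Invented subnet-to-site mappings; in Active Directory these are the
# subnet objects under CN=Subnets in the Configuration partition.
SUBNETS = {
    "10.1.0.0/16": "REDMOND",
    "10.2.0.0/16": "CONTOSO",
    "10.2.5.0/24": "CHARLOTTE",  # a more specific subnet wins
}

def site_for(ip):
    """Return the site of the most specific subnet containing ip, else None."""
    addr = ipaddress.ip_address(ip)
    matches = [(ipaddress.ip_network(net), site)
               for net, site in SUBNETS.items()
               if addr in ipaddress.ip_network(net)]
    if not matches:
        return None
    # the longest-prefix (most specific) matching subnet decides the site
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(site_for("10.2.5.20"))  # the /24 beats the /16, prints CHARLOTTE
```

If no subnet object covers the client's address, the lookup fails and the client has no site, which is why correct subnet definitions matter so much.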

IMPORTANT: The computer determines the site name using a domain of which the computer is a member—not the user's domain. This determination occurs during computer startup—not during user logon. Microsoft Support article 939252 (http://support.microsoft.com/default.aspx?scid=kb;EN-US;939252) describes how you can change this behavior.

NOTE: Windows writes the site name to the registry during each computer startup. You can view the registry to determine the site to which the computer believes it belongs. You should NOT modify this registry value. The path to this registry value is: HKLM\System\CurrentControlSet\Services\Netlogon\Parameters\DynamicSiteName

Domain Controller discovery during user logon

Windows attempts to use the domain controller closest to the local computer (ideally in the same site) for authentication. Windows finds the closest domain controller by using DNS SRV resource records.

Here’s how it works: Windows first performs a DNS query for an _ldap SRV record scoped to the computer's current site, but using the domain name selected from the CTRL+ALT+DEL sequence – the domain name of the user. An example looks like:

_ldap._tcp.<Computer Site>._sites.dc._msdcs.<User Domain>.com: type SRV, class IN

The DNS server responds, indicating that no record by that name exists. Windows then attempts to find any domain controller within the user's domain. Windows accomplishes this by removing the site name from the DNS query, similar to the one above:

_ldap._tcp.dc._msdcs.<User Domain>.com: type SRV, class IN

The DNS server's response to the above request includes a resource record for every domain controller in the user's domain. Since the client receives the list of domain controllers in no particular order, this is usually why the domain controller locator does not use the closest domain controller for authentication.
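To see why "no particular order" matters, here is a sketch in Python of the generic SRV selection rules a DNS client follows (RFC 2782): the lowest-priority group wins, and targets within that group are chosen at random, biased by their weight. The host names below are hypothetical:

```python
import random

def pick_dc(records):
    """records: list of (priority, weight, port, target) SRV tuples."""
    best = min(r[0] for r in records)            # lowest priority wins
    group = [r for r in records if r[0] == best]
    total = sum(r[1] for r in group)
    if total == 0:
        return random.choice(group)[3]           # all weights zero: pure random
    n = random.randint(1, total)                 # weighted random selection
    for priority, weight, port, target in group:
        n -= weight
        if n <= 0:
            return target

# Both litware.com DCs register with the same priority and weight, so
# roughly half of all logons will chase the remote REDMOND DC.
records = [(0, 100, 389, "redmond-dc.litware.com"),
           (0, 100, 389, "contoso-dc.litware.com")]
print(pick_dc(records))
```

With equal priorities and weights, the client has no reason to prefer the nearby domain controller, which matches the behavior seen in the capture.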

NOTE: The Netlogon service of a domain controller is responsible for dynamically registering the _ldap service resource records with its configured DNS server. This registration includes both the site-specific and domain-specific _ldap records.

Our scenario involves a terminal server, which allows multiple user sessions. This is a great way to take network captures of user logon issues: use one of the logon sessions to take the network capture. It is always best to clear the DNS and NetBIOS name caches before starting a network capture. This ensures the network capture includes name resolution. You can clear these caches by running IPConfig /FlushDNS and NbtStat -R from a command window (an elevated command prompt on Windows Server 2008 and Vista).

Here is a snippet of the output of the network capture.

Figure 2 Contoso.com network capture of user logon

The results of the network capture show the domain controller locator attempting to locate a domain controller in the site with the same name as the site of the computer, but in the user's domain (frame 3). The DNS server responds with no such name. This is correct: the litware.com forest has only two sites, REDMOND and CONTOSO. Frame 4 queries for an SRV record a second time; however, this time the query does not include the site name of the computer (_ldap._tcp.dc._msdcs.litware.com). The DNS response provides a positive answer to the second query. The answer includes an _ldap record for each domain controller in the litware.com domain (the user's domain).

We've covered the background information and the problem. Now, let's talk about how to fix it. We can accomplish this by using Active Directory Sites and Services to rename the CONTOSO site in the litware.com domain to CHARLOTTE (the name of the site hosting the computer in the contoso.com domain). Active Directory site configuration is stored in the configuration partition of Active Directory. Renaming the site creates a change, and you'll want to ensure this change replicates fully, especially to the litware.com domain controller that is now located in the CHARLOTTE site of the litware.com domain. After Active Directory replication completes, restart the Netlogon service on the litware.com domain controller in the CHARLOTTE site (in the litware.com domain). This registers the domain controller's service resource records in DNS, including the new site to which it belongs.

Figure 3 Forest trust configuration after site rename

Let's take another network trace of a litware.com domain user logon from the terminal server in the contoso.com domain.

Figure 4 Network capture after site rename in litware.com

The same DNS query from Figure 2 appears in Figure 4. Frame 3 shows the domain controller locator attempting to find a domain controller service resource record in the CHARLOTTE site of the user's domain, litware.com. However, the difference between Figure 2 and Figure 4 is the DNS response. Figure 2 returned a negative DNS response because a resource record for a domain controller did not exist in the CHARLOTTE site in the litware.com domain. But Figure 4 shows a positive DNS response (frame 5) for a service resource record for a domain controller in the CHARLOTTE site of the litware.com domain.

Conclusion

After renaming the sites so that they match in both forests, the terminal server in the contoso.com domain successfully located a domain controller covering the CHARLOTTE site for litware.com domain. Ideally, in this scenario you would also want the litware.com domain controller covering for the contoso.com CHARLOTTE site to be on the same subnet as the contoso.com terminal server. This helps expedite litware.com logon requests originating from the terminal servers. Regardless, the configuration provides a way to distinguish a specific domain controller for use with logons that span across forests.

Hello everyone, Scott Goad here, and today I want to take a few minutes to talk about a recent case where security settings from the Default Domain Policy were not being reported. In this case, we had a small environment with two domain controllers: one holding all of the FSMO roles, the other a replica domain controller.

The issue was noticed during an internal audit: the customer found that certain security settings were not reported when running GPRESULT /v, which normally details the resultant set of policy for a particular user and computer. To troubleshoot the issue we began gathering data, and sure enough, some items that were specified in the Default Domain Policy were skipped, and no errors were logged.

We could even demote the Domain Controller down to a member server, and policy would apply and report correctly.

At this point, we looked at the Default Domain Policy, and the settings were there:

After investigating the issue further, we decided to look at the local security policy, and see what was actually getting applied. Below is a piece that was failing, according to GPRESULT:

These screenshots were taken from different servers, after we made the changes in the policy. At this point, we know that we applied the settings, but we are not logging this anywhere. We asked our friends at the Global Escalation Services (GES) Team to take a look. They asked us to move the PDC emulator role to the other DC, and see if the behavior changed. It did! The policy settings in GPRESULT followed the PDC emulator role.

GES reviewed the code, and this behavior is by design. The PDC emulator, member servers, and domain-joined workstations apply these settings through Group Policy. Replica domain controllers for a domain apply these settings by monitoring what is present on the domain naming context head. These settings are replicated via Active Directory replication between the domain controllers of each domain in a forest, and the domain controllers consult them to govern some aspects of how they behave. If these settings change, the change is replicated, immediately noticed by the receiving DC, and the new behavior takes effect. Here is the full list of attributes:

Ned here. On October 1, 2008 the Professional level “Enterprise Platforms Support” business at Microsoft will transition to a call-back model for all Professional support incidents in the United States and Canada. Below is a description of how to contact Microsoft customer service, as well as other relevant information regarding this change:

Why are we moving to a call-back model?

To minimize the amount of time that customers spend on hold

To route the support incident to the correct resource the first time, minimizing misrouted support incidents.

Contacting Microsoft Product Support by Phone

You can submit a support request via phone by calling 800-936-4900. Once your support incident is created, the case will be routed to the appropriate support team and you will receive a call from a support engineer. The response time is based on the severity of the incident. During business hours (Monday – Friday, 6:00 AM – 6:00 PM US Pacific Time), the cost is $259. If you need to work with a support engineer outside of these hours, the cost is $515. Please note that you can open a support incident via phone at any time. The after-hours rate only applies if you need to work with a support engineer during non-business hours. This information is also outlined on the Microsoft Help and Support site.

Contacting Microsoft Product Support using Online Support Submissions

As an alternative to opening a support incident via phone, you can use our Online Support Submissions process. As with our phone support offering, this is also a call-back model, and the response time is based on the incident severity. At this time, Online Support Submissions call-backs are only available during business hours (Monday – Friday, 6:00 AM – 6:00 PM US Pacific Time), and the cost of a support incident is $259.

Hello, it’s LaNae again. A major issue I see when customers call in regarding ADAM/AD LDS is around the creation of Service Connection Points and why they are needed. Let’s take a further look into this topic and uncover the mystery of this object.

What are Service Connection Points?

Service Connection Points (SCPs) are objects in Active Directory that hold information about services. Services can publish information about their existence by creating serviceConnectionPoint objects in Active Directory, and client applications use this information to find and connect to instances of the service. ADAM/AD LDS is no exception. The serviceConnectionPoint object class is derived from the connectionPoint class. ServiceConnectionPoints in ADAM/AD LDS contain some key attribute information that is needed for client application discovery. The table below lists the contents of the serviceConnectionPoint object attributes. You can also find this information by using ADSIEDIT.MSC and viewing the properties of the child object of the computer object on which the ADAM/AD LDS instance is installed.

If ADAM/AD LDS is installed in a domain, and the ADAM/AD LDS service account has the Create Child right on the computer object where the serviceConnectionPoint object will be created, it will attempt to create a serviceConnectionPoint object in Active Directory. By default, global catalogs in Active Directory contain the same information that can be found in the Keywords attribute of an SCP object. Client applications search the SCP attributes located in the global catalog to find an ADAM/AD LDS instance. Client applications can search for:

The ADAM/AD LDS object identifier

Configuration partition GUID

ADAM/AD LDS instance GUID

ADAM/AD LDS instance name, or any directory partition.

Client applications may perform load balancing by choosing an ADAM/AD LDS instance randomly when a search returns more than one applicable instance.
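As an illustration, a client's search against the global catalog for one of those values might use LDAP parameters along these lines; the keyword value shown is a placeholder, not a real GUID:

```
Host:       any global catalog server, port 3268
Base:       "" (entire forest)
Scope:      subtree
Filter:     (&(objectClass=serviceConnectionPoint)(keywords=<instance GUID>))
Attributes: serviceBindingInformation, serviceDNSName, keywords
```

The serviceBindingInformation attribute returned by such a search gives the client the host and port it needs to connect to the instance.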

Discovering ADAM/AD LDS without Service Connection Points

Understand that ADAM/AD LDS does not need SCPs to be published in order to function properly; it can run with or without them. If ADAM/AD LDS is installed in a workgroup environment, or if the service account that ADAM/AD LDS runs under does not have the proper permissions to create SCPs, then it will not create one. Under these circumstances client applications will use DNS to resolve the host name of a computer that has ADAM/AD LDS installed.

Creating Service Connection Point Objects

Service Connection Point objects can be created automatically or manually. When ADAM/AD LDS is installed on a machine that is part of a domain, it will attempt to create a service connection point object. This object appears as a child object of the computer object where the ADAM/AD LDS instance is installed. If the service account that is used to run ADAM/AD LDS does not have the “Create Child” right on the computer object where the ADAM/AD LDS instance is installed, it will fail to create the service connection point object. ADAM/AD LDS will then log Event ID 2537 in the ADAM/AD LDS event log, stating that it could not create the SCP due to insufficient rights.

How are SCP Objects updated?

ADAM/AD LDS checks the SCP object for changes when the instance is started, and then reviews the SCP object every hour after that to make sure it is still valid. When the instance starts, it searches the global catalog for its GUID and uses that to find the distinguished name of the SCP object. The ADAM/AD LDS instance then binds to the distinguished name of the SCP object and updates it if needed. The interval at which the ADAM/AD LDS instance reviews the SCP object can be modified by adding the Server information update interval (mins) DWORD value to the following registry key: HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\instancename\parameters. Set the value data to the desired time interval in minutes.
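For example, a .reg fragment that sets the review interval to 30 minutes might look like this, where 'instancename' is a placeholder for your instance's actual service name:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\instancename\Parameters]
"Server information update interval (mins)"=dword:0000001e
```

(0x1e hexadecimal is 30 decimal; the value is interpreted in minutes.)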

Managing SCP Objects

You can manage the ADAM/AD LDS by creating an OU that contains the computer objects that host ADAM/AD LDS instances. Doing this will place all SCP objects in the same location and allow for ease of administration.

Modifying the SCP Object Creation Location

Earlier in this blog I stated that the SCP object is created as a child object of the computer object that holds the ADAM/AD LDS instance. This location can be changed by modifying the msDS-SCPContainer attribute located on the SCP Publication Service object, which can make administration of SCP objects easier. The following steps are identical for ADAM and AD LDS with a few exceptions: in the AD LDS schema you do not have to make any modification to the msDS-SCPContainer attribute, and there is no separate ADSIEDIT snap-in for AD LDS.

You will connect to the AD LDS instance using the ADSIEDIT.msc that is used to manage AD DS and connect to the instance on its respective port.

1. You must first open the ADAM Schema (ADAM-adsiedit.msc) and navigate to the msDS-SCPContainer attribute.

2. Right click on the attribute and select properties. Check the box next to “Allow this attribute to be shown in advanced view” and “Index this attribute in the Active Directory” and click “OK”.

3. Launch ADAM ADSIEDIT or AD DS ADIEDIT

4. Connect to the configuration container of the ADAM/AD LDS instance and navigate to “CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration,CN=GUID of Instance”.

5. In the right pane, double-click the CN=SCP Publication Service object.

6. Locate the msDS-SCPContainer attribute and click edit.

7. Enter the DN of the location where you would like the SCP object to be created.

Note: you will need to give the ADAM/AD LDS service account the create child object right on the object whose DN you entered as the value. This can be done by right-clicking that object, going to Properties, clicking the Security tab, and granting the ADAM/AD LDS service account the “Create Child Objects” right.

Once you have completed these steps all ADAM/AD LDS instances will create the SCP in the specified location.

Summary

I hope this blog post has given you a better understanding of what Service Connection Point objects are, why we use them, and how to administer them.

Hi, Michael here. The following issue is one that I have seen come up from time to time, and it can be a challenge for IT administrators who are trying to use the built-in version 2 Domain Controller Authentication template in their environment. The concern typically appears when folks who used a version 1 certificate in the past find that the newer version 2 certificate gives some unexpected results.

So what’s the problem? Well, if you have a third-party application that uses LDAP over SSL to connect to the domain controller, it may not work initially using the new version 2 Domain Controller Authentication certificate.

So let’s go over the issue in detail. A third-party application was making LDAP over SSL connections to the domain controllers as part of its normal operation. This worked when the domain controller had a certificate based on the “old style” version 1 Domain Controller template, issued by an Enterprise Certification Authority. However, “Domain Controller” certificates have been superseded by certificates based on the “Domain Controller Authentication” template, which can happen for several reasons that we won’t go into in detail in this blog post. The end result is that the third-party application now fails.

What is the apparent problem? By default, the “Domain Controller Authentication” certificate has a blank subject field, and the Subject Alternative Name (SAN) field is marked critical. Simply put, some applications cannot use a certificate if the SAN field is marked critical.

Why is this field important? Some applications may have difficulty using the certificate if the SAN field is marked critical and the subject field is blank because of how these fields are checked when deciding whether to use a certificate.

So, how do you resolve this little quandary?

a.) You could change your application to be in compliance with RFC 3280 (see excerpts from RFC 3280 below).

b.) You could configure the domain controller to use a certificate based on the version 1 Domain Controller template.

c.) In the Domain Controller authentication certificate template, you can change the subject field from “none” to “common”. You can then issue a new Domain Controller Authentication certificate to the Domain Controller. In this certificate, the subject field contains the DNS name of the machine and the SAN field is not marked critical on the domain controller authentication certificate. Then delete the old “Domain Controller Authentication” certificate. Finally, reboot the machine.

Why reboot? As a general rule, if a Domain Controller already has a certificate for LDAP over SSL, it will not pick up the new one until the next reboot.

End result: The 3rd party application can successfully connect to this Domain Controller.

So why did this whole problem occur?

This change to have a “blank Subject field and a Critical SAN field” was made to conform to RFC 3280 (Internet X.509 Public Key Infrastructure April 2002). Here’s an excerpt from that RFC on why the change was made:

Subject Common name use is ambiguous.

Sometimes a DNS name is stored in this RDN, and sometimes other types of names are stored there.

The Subject Common Name is also limited to 64 characters in CCITT conforming implementations, so it can easily be too short for a full DNS name.

The Subject Alt Name extension encoding tags fields to identify their use.

This RFC was published in April 2002, and the V2 DC template change was implemented in Windows 2003.

The bottom line here is that client applications using LDAP over SSL need to be updated to conform to current standards (as described in the aforementioned RFC).

Our development team tells us this: Currently, we do not expect any immediate negative impact to reverting back to the old name representation – as long as all of the full DNS names are shorter than 64 characters, and as long as all of the characters in the full DNS names conform to the IA5 string character set.

There is certainly a potential danger of some applications no longer accepting DNS names in the Subject Common Name, but we have not yet seen that issue. For more detail you can read the RFC for yourself here: http://www.ietf.org/rfc/rfc3280.txt.

The subject alternative names extension allows additional identities to be bound to the subject of the certificate. Defined options include an Internet electronic mail address, a DNS name, an IP address, and a uniform resource identifier (URI). Other options exist, including completely local definitions. Multiple name forms, and multiple instances of each name form, MAY be included. Whenever such identities are to be bound into a certificate, the subject alternative name (or issuer alternative name) extension MUST be used; however, a DNS name MAY be represented in the subject field using the domainComponent attribute as described in section 4.1.2.4.

Because the subject alternative name is considered to be definitively bound to the public key, all parts of the subject alternative name MUST be verified by the CA.

Further, if the only subject identity included in the certificate is an alternative name form (e.g., an electronic mail address), then the subject distinguished name MUST be empty (an empty sequence), and the subjectAltName extension MUST be present. If the subject field contains an empty sequence, the subjectAltName extension MUST be marked critical.

... When the subjectAltName extension contains a domain name system label, the domain name MUST be stored in the dNSName (an IA5String). The name MUST be in the "preferred name syntax," as specified by RFC 1034 [RFC 1034]. Note that while upper and lower case letters are allowed in domain names, no signifigance is attached to the case. In addition, while the string " " is a legal domain name, subjectAltName extensions with a dNSName of " " MUST NOT be used. Finally, the use of the DNS representation for Internet mail addresses (wpolk.nist.gov instead of wpolk@nist.gov) MUST NOT be used; such identities are to be encoded as rfc822Name.

Note: work is currently underway to specify domain names in international character sets. Such names will likely not be accommodated by IA5String. Once this work is complete, this profile will be revisited and the appropriate functionality will be added.

Finally, here are a few additional links which can be helpful in planning and understanding this issue.

Hi, Ned here again. You may remember Mike Stephens writing about importing and exporting WMI filters back in May. A common follow up question we got from that blog post was: “Hey cool. So, uh, what are WMI filters again?”

Group Policy WMI filters were introduced with Windows XP, and are supported in Windows Server 2003, Windows Vista, and Windows Server 2008. They are not supported in Windows 2000, so if you have an all-2000 environment you’re out of luck (10 years is a long time to go without upgrading :-P).

For those still with us…

You can use WMI filters to add a decision on when to apply a given Group Policy. This can be very useful when users or computers are located in a relatively flat structure instead of specific OUs, for example. Filters can also help when you need to apply certain policies based on server roles, operating system version, network configuration, or other criteria. Windows evaluates these filters in the following order during overall Group Policy processing:

Policies in hierarchy are located.

WMI Filters are checked.

Security settings are checked.

Finally, once everything has ‘passed’, a policy is applied.

So we find all the policies that exist in the user/computer’s Local, Site, Domain, and OU hierarchy. Then we determine if the WMI filter evaluates as TRUE. Then we verify that the user/computer has Read and Apply Group Policy permissions for the GPO. This means that WMI filters are still less efficient than hierarchical linking, but you can definitely use filters to make decisions in a non-hierarchical Active Directory design.
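The order above can be sketched as a simple short-circuit check; the predicate functions here are stand-ins for the real processing, purely for illustration:

```python
def gpo_applies(gpo, in_hierarchy, wmi_filter_passes, acl_allows):
    """Return True only if all three gates pass, in processing order."""
    if not in_hierarchy(gpo):        # 1. located in Local/Site/Domain/OU scope
        return False
    if not wmi_filter_passes(gpo):   # 2. WMI filter evaluates TRUE
        return False
    if not acl_allows(gpo):          # 3. Read + Apply Group Policy permissions
        return False
    return True

# A linked, readable GPO whose WMI filter evaluates FALSE is skipped.
print(gpo_applies("ExampleGPO",
                  in_hierarchy=lambda g: True,
                  wmi_filter_passes=lambda g: False,
                  acl_allows=lambda g: True))   # prints False
```

Any single failed gate stops the GPO from applying, which is why a slow WMI filter gets evaluated on every policy refresh even for GPOs that ultimately do not apply.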

You configure WMI filters using the WMI Filters node in GPMC.MSC.

Figure 1 – GPMC WMI Filters Node

Then you can create, delete or edit a filter.

Figure 2 – WMI Filter Editor

Then you can link the WMI filter to any GPO you like (or more than one GPO), like below:

Figure 3 – GPMC Filter Dropdown

So in this case, I created a filter (you will see more on this below) that allows a GPO to apply to operating systems earlier than Windows Vista. I linked the WMI filter to a GPO that is applied to Windows Server 2008 computers – so the GPO shouldn’t apply. If I force Group Policy processing using GPUPDATE /FORCE then run GPRESULT /R, I see:

Figure 4 – GPRESULT output

Slick!

WMI filters use a language called WQL, which will be very familiar to anyone who has ever written a SQL query. The nice thing about learning WMI queries is that it forces you to learn more about the extremely powerful WMI system as a whole and its massive repository of data. WMI is organized into namespaces and classes, and 99% of the WQL queries you write will operate in the CIMV2 namespace, like all of the examples below.

So let’s look at some syntax examples:

Only for certain operating systems

It is common to want Group Policy objects to apply to a computer using a specific operating system or service pack installed. Here are some examples that cover a few bases:

SELECT * FROM Win32_OperatingSystem WHERE Version LIKE "5.1.%" AND ServicePackMajorVersion = 2

The above WQL query returns true only if the operating system is Windows XP Service Pack 2.

SELECT * FROM Win32_OperatingSystem WHERE Version LIKE "6.0.%" AND ProductType <> "1"

The above WQL query returns true only if the computer is running Windows Server 2008, regardless of service pack. Why so complex, you ask? Remember that Windows Server 2008 and Vista SP1 share the same codebase, so they actually have the exact same version. Choosing a product type not equal to 1 (which is Workstation) returns only servers or domain controllers running Windows Server 2008.

Only on Windows Server 2008 Core servers

What if you have a GPO that you want to apply only to servers running Windows Server 2008 Core installations? Here is a sample query (wrapped for readability; this should be entered as a single line in the filter dialog):

SELECT * FROM Win32_OperatingSystem WHERE OperatingSystemSKU = 12 OR OperatingSystemSKU = 13 OR OperatingSystemSKU = 14

If you want GPOs to apply only to computers NOT running Windows Server 2008 Core (and you can probably think of some reasons to do that), then you would negate the query: change the equal signs (=) to not-equal signs (<>) and the ORs to ANDs. (See http://msdn2.microsoft.com/en-us/library/ms724358.aspx for details and the non-Core values.)
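One caution when negating a WQL filter that ORs several values together: the comparisons AND the connectives both have to flip (De Morgan's law), or the filter will match everything. A quick Python check of the logic, using the Windows Server 2008 Core SKU values 12, 13, and 14 as the example set:

```python
def is_core(sku):
    # mirrors: SKU = 12 OR SKU = 13 OR SKU = 14
    return sku == 12 or sku == 13 or sku == 14

def is_not_core(sku):
    # correct negation: SKU <> 12 AND SKU <> 13 AND SKU <> 14
    return sku != 12 and sku != 13 and sku != 14

def broken_not_core(sku):
    # flipping only the comparisons: SKU <> 12 OR SKU <> 13 OR SKU <> 14
    return sku != 12 or sku != 13 or sku != 14

assert all(is_not_core(s) != is_core(s) for s in range(32))  # true complement
assert all(broken_not_core(s) for s in range(32))            # matches every SKU!
```

A filter that is always true silently applies the GPO everywhere, which is much harder to spot than one that never matches.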

Only on a certain day of the week

Yes this is possible! Yes, customers have asked how to do this! No, I have no idea why! Ok, kidding about that last one, but it sure seems like an odd request at first. It turns out that some companies like to do things like set a specific message of the day for their legal notice. Or have a separate screensaver running every day of the week for their users. Different strokes for different folks, I suppose.

To do this, your WQL queries (one filter per GPO that you wanted to set, remember) would be:

Select DayOfWeek from Win32_LocalTime where DayOfWeek = 1

Select DayOfWeek from Win32_LocalTime where DayOfWeek = 2

Select DayOfWeek from Win32_LocalTime where DayOfWeek = 3

You get the idea. One is Monday, two is Tuesday, etc.
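If you want to sanity-check the numbering, my reading of the WMI documentation is that Win32_LocalTime reports DayOfWeek with Sunday as 0 through Saturday as 6; a quick Python equivalent of that mapping:

```python
import datetime

def wmi_day_of_week(d):
    """Mimic Win32_LocalTime.DayOfWeek: Sunday=0 .. Saturday=6."""
    return d.isoweekday() % 7   # isoweekday(): Mon=1 .. Sun=7

print(wmi_day_of_week(datetime.date(2008, 9, 1)))   # a Monday, prints 1
```

Verify against your own systems (for example with WBEMTEST) before building a week's worth of filters around it.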

Wrapping it up

Hopefully you’ve found some new things to think about regarding WMI filters and Group Policy. A closing note: not all WMI filters are created equal. Not everything in WMI is as optimized as we’d like, and some queries run slower than others. Avoid loose wildcard queries when possible, as they will run slower (for example, Select * from Win32_LocalTime where DayOfWeek = 5 will run slightly slower than the samples provided above). And above all, always test before deploying to production, using the slowest hardware you can find so that you get a good idea of baseline performance.

Got a filter question or a good sample to share? Hit the comments section below.