Tuesday, September 6, 2016

Introduction

Welcome to the first in a series of Device
Guard blog posts. This post is going to cover some introductory concepts about
Device Guard and it will detail the relatively aggressive strategy that I used to configure it on my
Surface Pro 4 tablet running a fresh install of Windows 10 Enterprise
Anniversary Update (1607). The goal of this introductory post is to start
getting you comfortable with Device Guard and experimenting with it yourselves.
In subsequent posts, I will begin to describe various bypasses and will
describe methods to effectively mitigate against each bypass. The ultimate goal
of this series of posts is to educate readers about the strengths and current
weaknesses of what I consider to be an essential technology in preventing a
massive class of malware infections in a post-compromise scenario (i.e. exploit
mitigation is another subject altogether).

Device Guard Basics

Device Guard is a powerful set of hardware and software
security features available in Windows 10 Enterprise and Server 2016 (including
Nano Server with caveats that I won’t explain in this post) that aim to block
the loading of drivers, user-mode binaries (including DLLs), MSIs, and scripts (PowerShell and Windows Script Host - vbs, js, wsf, wsc) that are not explicitly authorized per policy. In other words, it’s a
whitelisting solution. The idea, in theory, is to prevent arbitrary
unsigned code execution (excluding remote exploits). Right off the bat, you may
already be asking, “why not just use AppLocker and why is Microsoft recreating
the wheel?” I certainly had those questions and I will attempt to address them
later in the post.

Device Guard can be broken down into two primary components:

1 - Code integrity (CI)

The code integrity component of Device Guard enforces both
kernel mode code integrity (KMCI) and user mode code integrity (UMCI). The
rules enforced by KMCI and UMCI are dictated by a code integrity policy - a
configurable list of whitelist rules that can apply to drivers, user-mode
binaries, MSIs, and scripts. Now, technically, PowerShell scripts
can still execute, but unless the script or module is explicitly allowed via the
code integrity policy, it will be forced to execute in constrained language mode, which prevents a user from calling Add-Type, instantiating .NET objects,
and invoking .NET methods, effectively precluding PowerShell from being used to
gain any form of arbitrary unsigned code execution. Additionally, WSH scripts will still execute if they don't comply with the deployed code integrity policy, but they will fail to instantiate any COM objects, which is reasonable considering unsigned PowerShell will still execute, albeit in a very limited fashion. As of this writing, the script-based protections of Device Guard are not documented by Microsoft.
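As a quick illustration of what constrained language mode disallows (a hypothetical interactive session on a UMCI-enforced host; the exact error text varies by build):

```powershell
# Check which language mode the current PowerShell session is running in
$ExecutionContext.SessionState.LanguageMode    # 'ConstrainedLanguage' under UMCI

# Instantiating arbitrary .NET objects and invoking their methods fails in constrained language mode
$Client = New-Object System.Net.WebClient      # blocked: only core types may be instantiated

# Add-Type is also blocked, precluding in-memory C# compilation
Add-Type -TypeDefinition 'public class Foo {}' # blocked in constrained language mode
```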

So with a code integrity policy, for example, if I wanted my system to
only load drivers or user-mode code signed by
Microsoft, such rules would be stated in my policy. Code
integrity policies are created using the cmdlets present in the ConfigCI
PowerShell module. CI policies are configured as a plaintext XML document then converted to a binary-encoded XML format when they are deployed. For additional protections, CI policies can also be signed with a valid code-signing certificate.
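As a rough sketch of that authoring workflow (paths and certificate name are illustrative; the cmdlets are from the ConfigCI module, and the signtool invocation follows Microsoft's documented CI policy signing parameters):

```powershell
# Author a policy as plaintext XML by scanning a golden system
New-CIPolicy -FilePath C:\DGPolicyFiles\InitialScan.xml -Level PcaCertificate -ScanPath C:\ -UserPEs

# Convert the XML policy to the binary-encoded format that Windows actually consumes
ConvertFrom-CIPolicy -XmlFilePath C:\DGPolicyFiles\InitialScan.xml -BinaryFilePath C:\DGPolicyFiles\SIPolicy.bin

# Optionally, sign the binary policy with a valid code-signing certificate:
# signtool.exe sign -v /n "MyPolicySigningCert" -p7 . -p7co 1.3.6.1.4.1.311.79.1 -fd sha256 SIPolicy.bin
```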

2 - Virtualization-based Security (VBS)

Virtualization-based Security comprises several
hypervisor-based and modern hardware-based security features that are used to protect
the enforcement of a code integrity policy, Credential Guard, and shielded VMs.
While it is not mandatory to have hardware that supports VBS features, without
it, the effectiveness of Device Guard will be severely hampered. Without
delving into too much detail, VBS improves the enforcement of Device Guard
by attempting to prevent even an elevated user without physical access to the
target from disabling code integrity enforcement. It can also
prevent DMA-based attacks and restrict any kernel code from creating
executable memory that isn’t explicitly conformant to the code integrity
policy. The following resources elaborate on VBS:

You can download
the fully documented code I used to generate my code integrity policy. You
can also download
the finalized code integrity policy that the code below generated for my
personal Surface Pro 4. Now, absolutely do not just deploy that to your system.
I’m only providing it as a reference for comparison to the code integrity
policy that you create for your system. Do not complain that it might be overly
permissive (because I know it is in some respects) and please do not ask why
this policy doesn’t work on your system. You also probably wouldn't want to trust code signed by my personal code-signing certificate. ;)

In the Ignite talk linked to above, Scott and Jeffrey
describe creating a code integrity policy for a golden system by scanning the
computer for all binaries present on it and allowing any driver, script, MSI, application, or DLL to execute based on the certificate used to sign those
binaries/scripts. While this is, in my opinion, a relatively simple way to establish an
initial policy, in practice I consider this approach to be overly permissive.
When I used this methodology on my fresh install of Windows 10 Enterprise
Anniversary Update with Chrome installed, the code integrity policy generated
consisted of what would be considered normal certificates mixed in with several
test signing certificates. Personally, I don’t want to grant anything
permission to run that was signed with a test certificate. Notable certificates
present in the generated
policy were the following:

Microsoft Windows Phone Production PCA 2012

MSIT Test CodeSign CA 6

OEMTest OS Root CA

WDKTestCert wdclab,130885612892544312

Upon finding such certificate oddities, I decided to tackle
development of a code integrity policy another way – create an empty policy (i.e.
deny everything), configure Device Guard in audit mode, and then craft my
policy based on what was loaded and denied in the CodeIntegrity event log.

So now let’s dive into how I configured my Surface Pro 4.
For starters, I only wanted signed Microsoft code to execute (with a couple
third party hardware driver exceptions). Is this a realistic configuration? It
depends but probably not. You’re probably going to want non-Microsoft code to
run as well. That’s fine. We can configure that later but my personal goal is
to only allow Microsoft code to run since everyone using Device Guard will need
to do that at a minimum. I will then have a pristine, locked down system which
I can then use to research ways of gaining unsigned code execution with signed
Microsoft binaries. Now, just to be clear, if you want your system to be able to
boot and apply updates, you’ll obviously need to allow code signed by Microsoft
to run. So to establish my “golden system,” I did the following:

Performed a fresh install of Windows 10 Enterprise Anniversary Update.

Ensured that it was fully updated via Windows Update.

In the empty, template policy, I have the
following policy rules enabled:

1 - Unsigned System Integrity Policy (during policy configuration/testing phases)

Leaving the policy unsigned during the configuration and testing phases lets me update and remove it freely. Signing your code integrity policy makes it so that deployed policies cannot be removed (assuming they are locked in UEFI using VBS protections) and that they can only be updated using approved code signing certificates as specified in the policy.

2 - Audit Mode (during policy configuration/testing phases)

I want to simulate denying execution of
everything on the system that attempts to load. After I perform normal
computing tasks on my computer for a while, I will then develop a new code
integrity policy based upon the certificates used to sign everything that would
have been denied in the Microsoft-Windows-CodeIntegrity/Operational and Microsoft-Windows-AppLocker
(it is not documented that Device Guard pulls from the AppLocker log) logs.

3 - Advanced Boot Options Menu

If I somehow misconfigure my policy, deploy it, and my Surface no longer boots, I’ll need a fallback option to recover. This option allows me to reboot, hold down F8, and access a recovery prompt where I could delete the deployed code integrity policy if I had to. Note: you might be thinking that this would be an obvious Device Guard bypass for someone with physical access. Well, if your policy is not in audit mode and it is required to be signed, you can delete the deployed code integrity policy from disk, but it will return unharmed after a reboot. Configuring BitLocker would prevent an attacker with physical access from viewing and deleting files from disk via the recovery prompt, though.

4 - UMCI

We want Device Guard to not only apply to drivers but to user-mode binaries, MSIs, and scripts as well.

5 - WHQL

Only load drivers that are Windows Hardware Quality Labs (WHQL) signed. This is supposed to be a mandate for all new Windows 10-compatible drivers so we’ll want to make sure we enforce this.

6 - EV Signers

We want to only load drivers that are not only WHQL signed but also signed with an extended validation certificate. This is supposed to be a requirement for all drivers in Windows 10 Anniversary update. Unfortunately, as we will later discover, this is not the case; not even for all Microsoft drivers (specifically, my Surface Pro 4-specific hardware drivers).

Several other policy rules will be described in
subsequent steps. For details on all the available, configurable policy rule
options, read the official
documentation.

What will follow will be the code and rationale I used to
develop my personal code integrity policy. This is a good time to mention that
there is never going to be a one size fits all solution for code integrity
policy development. I am choosing a relatively locked down, semi-unrealistic policy that will
most likely form a minimal basis for pretty much any other code integrity
policy out there.

Configuration Phase #1 - Deny-all audit policy deployment

In this configuration phase, I’m going to create an empty,
template policy placed in audit mode that will simulate denying execution of
every driver, user-mode binary, MSI, and script. After running my system
for a few days and getting a good baseline for the programs I’m going to
execute (excluding third party binaries since I only want MS binaries to run),
I can generate a new policy based on what would have been denied execution in
the event log.

There is no standard method of generating an empty policy so
what I did was call New-CIPolicy and have it generate a policy from a
completely empty directory.

It is worth noting at this point that I will be deploying
all subsequent policies directly to
%SystemRoot%\System32\CodeIntegrity\SIPolicy.p7b. You can, however, configure
an alternate file path via Group Policy from which CI policies will be pulled,
and I believe you have to set this location via Group Policy if you’re using a
signed policy file (at least from my experimentation). This procedure is documented
here.
You had damn well better make sure that no user has write access to
the directory where the policy file is contained if an alternate path is
specified with Group Policy.

What follows is the code I used to generate and deploy the
initial deny-all audit policy. I created a C:\DGPolicyFiles directory to contain
all my policy related files. You can use any directory you want though.
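In sketch form, the generate-and-deploy sequence looks like this (file names are illustrative; option 3 corresponds to "Enabled:Audit Mode" per the Set-RuleOption documentation):

```powershell
$PolicyDirectory = 'C:\DGPolicyFiles'
$EmptyDir = Join-Path $PolicyDirectory 'Empty'
New-Item -ItemType Directory -Path $EmptyDir -Force | Out-Null

# Scanning an empty directory yields a policy with no allow rules - i.e. deny everything
New-CIPolicy -FilePath "$PolicyDirectory\DenyAllAudit.xml" -Level PcaCertificate -ScanPath $EmptyDir -UserPEs

# Option 3 = 'Enabled:Audit Mode': log what would have been blocked instead of blocking it
Set-RuleOption -FilePath "$PolicyDirectory\DenyAllAudit.xml" -Option 3

# Convert to binary form and deploy to the default code integrity policy location
ConvertFrom-CIPolicy -XmlFilePath "$PolicyDirectory\DenyAllAudit.xml" -BinaryFilePath "$PolicyDirectory\SIPolicy.bin"
Copy-Item "$PolicyDirectory\SIPolicy.bin" C:\Windows\System32\CodeIntegrity\SIPolicy.p7b
```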

Hopefully, you’ve run your system for a while and
established a good baseline of all the drivers, user-mode binaries (including
DLLs) and scripts that are necessary for you to do your job. If
that’s the case, then you are ready to build generate the next code integrity
policy based solely on what was reported as denied in the event log.
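To see what audit mode would have denied, the relevant logs can be queried directly (event ID 3076 is, in my observation, the audit-mode "would have been blocked" event; verify against your own log):

```powershell
# CodeIntegrity audit events for binaries/drivers that would have been blocked
Get-WinEvent -LogName 'Microsoft-Windows-CodeIntegrity/Operational' |
    Where-Object { $_.Id -eq 3076 } |
    Select-Object -Property TimeCreated, Message

# Script and MSI enforcement events surface in the AppLocker log
Get-WinEvent -LogName 'Microsoft-Windows-AppLocker/MSI and Script' |
    Select-Object -Property TimeCreated, Id, Message
```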

When generating this new code integrity policy, I will
specify the PcaCertificate file rule level which is probably the best file rule
level for this round of CI policy generation as it is the highest in the code
signing cert signer chain and it has a longer validity time frame than a leaf
certificate (i.e. lowest in the signing chain). You could use more restrictive file rule levels (e.g. LeafCertificate, Hash, FilePublisher, etc.), but you would be weighing updatability against increased security. For example, you should be careful when whitelisting third party PCA certificates, as a malicious actor would just need to be issued a code signing certificate from that third party vendor as a means of bypassing your policy. Also, consider a scenario where a vulnerable older version of a signed Microsoft binary was used to gain code execution. If this is a concern, consider using a file rule level like FilePublisher or, for WHQL-signed drivers, WHQLFilePublisher.

Now, when we call New-CIPolicy to generate the policy based on the audit log, you
may notice a lot of warning messages claiming that it is unable to locate a
bunch of drivers on disk. This appears to be an unfortunate path parsing bug that will become a
problem that we will address in the next configuration phase.

Driver path parsing bug

# Hopefully, you've spent a few days using your system for its intended purpose and didn't
# install any software that would compromise the "gold image" that you're aiming for.
# Now we're going to craft a CI policy based on what would have been denied from loading.
# Obviously, these are the kinds of applications, scripts, and drivers that will need to
# execute in order for your system to work as intended.

# The staging directory I'm using for my Device Guard setup
$PolicyDirectory = 'C:\DGPolicyFiles'

# Path to the CI policy that will be generated based on the entries present
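The generation call that followed those comments likely resembled the following (a sketch; variable and file names are illustrative, the -Audit switch builds rules from the CodeIntegrity event log, and redirecting the warning stream captures the driver path warnings discussed above):

```powershell
$PolicyDirectory = 'C:\DGPolicyFiles'

# Generate a CI policy from the audit log entries; warnings (stream 3) note files
# that could not be found on disk due to the path parsing bug
New-CIPolicy -FilePath "$PolicyDirectory\AuditLogPolicy.xml" -Level PcaCertificate -Audit -UserPEs 3> "$PolicyDirectory\CIAuditWarnings.txt"
```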

Configuration Phase #3 - Resolving missing driver rules

In this phase, we’ve rebooted and noticed that there are a
bunch of drivers that wouldn’t have loaded if we actually enforced the policy.
This is due to the driver path parsing issue I described in the last section.
Until this bug is fixed, I believe there are two realistic methods of handling
this:

Use a PowerShell script to extract the offending driver paths from the event log, copy those drivers to a dedicated directory, generate a new policy based on the drivers in that directory, and then merge that policy with the policy we generated in phase #2. I personally had some serious issues with this strategy in practice.

Generate a policy by scanning %SystemRoot%\System32\drivers and then merge that policy with the policy we generated in phase #2. For this blog post, that’s what we will be doing out of simplicity. The only reason I hesitate to use this strategy is that I don’t want to be unnecessarily permissive and whitelist certificates for drivers I don’t use that might be issued by a non-Microsoft public certification authority.

Additionally, one of the side effects of this bug is that
the generated policy from phase #2 only has rules for user-mode code and not
drivers. We obviously need driver rules.
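A sketch of the second option (file names are illustrative; Merge-CIPolicy is part of the ConfigCI module):

```powershell
$PolicyDirectory = 'C:\DGPolicyFiles'

# Generate driver rules by scanning the drivers directory directly
New-CIPolicy -FilePath "$PolicyDirectory\DriverScanPolicy.xml" -Level PcaCertificate -ScanPath C:\Windows\System32\drivers

# Merge the driver rules into the audit-log-derived policy from phase #2
Merge-CIPolicy -OutputFilePath "$PolicyDirectory\CombinedPolicy.xml" -PolicyPaths "$PolicyDirectory\AuditLogPolicy.xml", "$PolicyDirectory\DriverScanPolicy.xml"
```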

# My goal in this phase is to see what remaining CodeIntegrity log entries
# exist and to try to rectify them while still in audit mode before placing
# code integrity into enforcement mode.

# For me, I had about 30 event log entries that indicated the following:

Alright, we’ve rebooted and the CodeIntegrity log no longer
presents the entries for drivers that would not have been loaded. Now we’re
going to simply remove audit mode from the policy, redeploy, reboot, and cross
our fingers that we have a working system upon reboot.
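Removing audit mode is a one-liner against the policy XML, followed by a redeploy (option 3 per the Set-RuleOption documentation; file names are illustrative):

```powershell
$PolicyXml = 'C:\DGPolicyFiles\CombinedPolicy.xml'

# Deleting option 3 ('Enabled:Audit Mode') places the policy into enforcement mode
Set-RuleOption -FilePath $PolicyXml -Option 3 -Delete

# Redeploy the now-enforced policy; the change takes effect after a reboot
ConvertFrom-CIPolicy -XmlFilePath $PolicyXml -BinaryFilePath C:\Windows\System32\CodeIntegrity\SIPolicy.p7b
```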

# This is the point where I feel comfortable enforcing my policy. The CodeIntegrity log
# is now only populated with a few anomalies - e.g. primarily entries related to NGEN

Configuration Phase #5 - Removing EV signer enforcement

So it turns out that I was a little overambitious in forcing
EV signer enforcement on my Surface tablet as pretty much all of my Surface
hardware drivers didn't load. This is kind of a shame considering I would
expect MS hardware drivers to be held to the highest standards imposed by MS.
So I'm going to remove EV signer enforcement and, while I'm at it, I'm going to
enforce blocking of flight-signed drivers. These are drivers signed by an MS
test certificate used in Windows Insider Preview builds. So obviously, you
won't want to be running WIP builds of Windows if you're enforcing this.
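In sketch form (option numbers per the Set-RuleOption documentation; file names are illustrative):

```powershell
$PolicyXml = 'C:\DGPolicyFiles\CombinedPolicy.xml'

# Drop option 8 ('Required:EV Signers') so my Surface hardware drivers load again
Set-RuleOption -FilePath $PolicyXml -Option 8 -Delete

# Enable option 4 ('Disabled:Flight Signing') to block flight-signed Insider Preview drivers
Set-RuleOption -FilePath $PolicyXml -Option 4

# Redeploy and reboot
ConvertFrom-CIPolicy -XmlFilePath $PolicyXml -BinaryFilePath C:\Windows\System32\CodeIntegrity\SIPolicy.p7b
```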

FYI, I was fortunate enough for the system to boot to
discover that EV signature enforcement was the issue.

# Reboot the computer and the modified, enforced policy will be in place.

# In retrospect, it would have been smart to have enabled "Boot Audit on Failure"
# with Set-RuleOption as it would have placed Device Guard into audit mode in order to allow
# boot drivers to load that would have otherwise been blocked by policy.

Configuration Phase #6 - Monitoring and continued hardening

At this point we have a decent starting point and I'll leave
it up to you as to how you'd like to proceed in terms of CI policy
configuration and deployment.

Me personally, I performed the following:

Used Add-SignerRule to add an Update and User signer rule with my personal code signing certificate. This grants me permission to sign my policy and to execute user-mode binaries and scripts signed by me. I need to sign some of my PowerShell code that I use often since it is incompatible with constrained language mode. Signed scripts authorized by CI policy execute in full language mode. Obviously, I need to sign my own code sparingly. For example, it would be dumb for me to sign Invoke-Shellcode since that would explicitly circumvent user-mode code integrity.

Removed "Unsigned System Integrity Policy" from the configuration. This forces me to sign the policy. It also prevents modification and removal of a deployed policy; the policy can then only be updated by signing an updated version.

I removed the "Boot Menu Protection" option from the CI policy. This is a potential vulnerability to an attacker with physical access.

I also enabled virtualization-based security via group policy to achieve the hardware supported Device Guard enforcement/improvements.

What follows is the code I used to allow my code signing cert to sign the policy and sign user-mode binaries. Obviously, this is specific to my personal code-signing certificate.

# I don't plan on using my code signing cert to sign drivers so I won't allow that right now.

# Note: I'm performing these steps on an isolated system that contains my imported code signing
# certificate. I don't have my code signing cert on the system that I'm protecting with
# Device Guard.

# Now, once I deploy this policy, I will only be able to make updates to the policy by
# signing an updated policy with the same signing certificate.
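The commands those comments described would have looked roughly like this (certificate path and name are placeholders; the signtool parameters follow Microsoft's documented CI policy signing procedure):

```powershell
$PolicyXml = 'C:\DGPolicyFiles\CombinedPolicy.xml'

# Authorize my certificate for policy updates (-Update) and user-mode code (-User),
# deliberately omitting -Kernel so it cannot sign drivers
Add-SignerRule -FilePath $PolicyXml -CertificatePath C:\DGPolicyFiles\MyCodeSigningCert.cer -Update -User

# Delete option 6 ('Enabled:Unsigned System Integrity Policy') to require a signed policy
Set-RuleOption -FilePath $PolicyXml -Option 6 -Delete

ConvertFrom-CIPolicy -XmlFilePath $PolicyXml -BinaryFilePath C:\DGPolicyFiles\SIPolicy.bin

# Sign the binary policy on the isolated machine holding the signing certificate:
# signtool.exe sign -v /n "MyCodeSigningCert" -p7 . -p7co 1.3.6.1.4.1.311.79.1 -fd sha256 SIPolicy.bin
```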

Virtualization-based Security Enforcement

My Surface Pro 4 has the hardware to support these features
so I would be silly not to employ them. This is easy enough to do in Group
Policy. After configuring these settings, reboot and validate that all Device
Guard features are actually set. The easiest way to do this in my opinion is to
use the System Information application.

Enabling Virtualization Based Security Features

Confirmation of Device Guard enforcement
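The same confirmation can be scripted via WMI (the Win32_DeviceGuard class; property names as of Windows 10 1607):

```powershell
# SecurityServicesRunning and the code integrity enforcement status properties
# indicate which Device Guard protections are actually active
Get-CimInstance -Namespace root\Microsoft\Windows\DeviceGuard -ClassName Win32_DeviceGuard |
    Select-Object -Property *
```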

Conclusion

If you’ve made it this far, congratulations! Considering
there’s no push-button solution to configuring Device Guard according to your
requirements, it can take a lot of experimentation and practice. That said, I
don’t think there should ever be a push-button solution to the development of a
strong whitelisting policy catered to your specific environment. It takes a lot of work, just as competently
defending your enterprise should take a lot of work versus simply throwing money
at "turnkey solutions".

Examples of blocked applications and scripts

Now at this point, you may be asking the following questions
(I know I did):

How much of a pain will it be to update the policy to permit new applications? Well, this would in essence require a reference machine that you can place into audit mode during a test period for the new software installation. You would then need to generate a new policy based on the audit logs and hope that all loaded binaries are signed. If not, you’d have to fall back to file hash rules, which would force you to update the policy again as soon as a new update comes out. This process is complicated by installer applications, whereas configuring portable binaries should be much easier since the footprint is much smaller.

What if there’s a signed Microsoft binary that permits unsigned code execution? Oh these certainly exist and I will cover these in future blog posts along with realistic code integrity policy deny rule mitigations.

What if a certificate I whitelist is revoked? I honestly don’t think Device Guard currently covers this scenario.

What are the ways in which an admin (local or remote) might be able to modify or disable Device Guard? I will attempt to enumerate some of these possibilities in future blog posts.

What is the fate of AppLocker? That’s a question only Microsoft can answer.

I personally have many more questions but this blog post may not be the appropriate forum to air all possible grievances. I have been in direct contact with the Device Guard team at Microsoft and they have been very receptive to my feedback.

Finally, despite the existence of bypasses, in many cases
code integrity policies can be supplemented to mitigate many known bypasses. In
the end though, Device Guard will significantly raise the cost to an attacker
and block most forms of malware that don't specifically take Device Guard bypasses
into consideration. I commend Microsoft for putting some serious thought and
engineering into Device Guard and I sincerely hope that they will continue to improve it, document it more thoroughly, and evangelize it. Now, I may be being overly optimistic, but I would hope that they
would consider any vulnerabilities to the Device Guard implementation and
possibly even unsigned code execution from signed Microsoft binaries to be a
security boundary. But hey, a kid can dream, right?

I hope you enjoyed this post! Look forward to more Device
Guard posts (primarily with an offensive twist) coming up!

8 comments:

This may sound completely stupid, but isn't an entitlement service (like 10Duke, SafeNet etc.) able to do this (at an enterprise level, at least), considering that access could be granted or prevented based on a near-infinite number of rules? Or did I misunderstand what Device Guard and entitlement services do in general?

I'm not familiar with those solutions. Device Guard is a pretty effective, built-in whitelisting solution. Obviously, there's other solutions out there. Device Guard is newer and I wanted to bring attention to it.

I was working on a security checklist for Windows 10 and became disappointed because there are so many differences between the different versions (Home, Professional, Enterprise, Student). Microsoft needs to get to one version like Mac. Device Guard may be a feature a home user wouldn't use because of the complexity but it doesn't make sense to not have one version from a security perspective. It can make your head hurt trying to figure out what you can do to secure your home computer or small business computer regardless of which version you are running....anyway...thanks for breaking it down barney style :).

Is it possible to allow everything except for a few things you want to block? Just Device Guard blacklisting? Whitelisting is too hard/confusing right now, but I want to block bypass stuff and enforce constrained language mode.

I don't know if that's possible as that is a scenario I've never attempted. I've seen RevokeSIPolicy.p7b lying around so there may be potential there but that's not documented. My theory is that any "approved" rules in a RevokeSIPolicy.p7b will block.

If you're wanting to emulate just blocking, scan your system and approve everything at the PCACertificate level and then create your specific deny rules.

But if at the end of the day, you're just wanting to block things, then just set some deny ACLs on what you want to block. My personal opinion is that CLM enforcement doesn't buy you much if you're not whitelisting other executable file types.

"Unable to generate rules for all scanned files at the requested level. A list of files not covered by the current policy can be found at C:\Users\\AppData\Local\Temp\tmp47F3.tmp. If it is safe to not include these files, no action needs to be taken, otherwise a more complete policy may be created using the -fallback switch"

I agree that AWL is not a worthwhile investment if you're not using it for its intended purpose - whitelisting.

It is totally normal to see that error. You'll get that when signable code is not signed. A common thing you'll see in the tmp file is a bunch of MOFs. Those don't actually need to be signed in most cases (outside of DSC scenarios, I believe). Some code just isn't signed or signed properly, though, despite being included in the OS.