The ability to remain active on a target system even after reboots is a key component of a long-term successful compromise. Unfortunately for defenders, there are many ways for a threat actor to persist in Windows across reboots, and it can be very difficult to comprehensively identify these areas without specialized software. This is where Sysinternals' Autoruns comes into play. Autoruns is a widely used Sysinternals tool that helps bring to light the many different areas in Windows used for persistence.

The purpose of this integration, stated succinctly, is:

To further enhance the host-level capabilities of Security Onion by integrating Sysinternals Autoruns’ logs into the Security Onion ecosystem, and making this data available for OSSEC rulesets as well as ELSA queries.

Without commenting on the political issues, I think it is a well-articulated chapter urging the Information Technology community to move away from its continued fascination with tools and toward a more realistic understanding of where those tools fit into the (digital) security landscape. A key quote:

“An analysis of time intervals is key to understanding the interaction between attackers and defenders, but in general the security community does not sufficiently understand or appreciate the nature and consequences of this relationship. A technology-centric worldview obsesses about a static, one-time exchange between attacker and defender. This is not an accurate description of the real world, which is populated, not with mindless code, but with rational and irrational human beings who are both intelligent and adaptive adversaries and who observe their targets, allocate resources, and make dynamic decisions in order to accomplish their goals.” [Emphasis mine]

I have seen this technology-centric worldview go hand-in-hand with a vulnerability-centric mindset – where the focus is on dealing with vulnerabilities at the expense of an intelligence-driven, threat-centric mindset. When an organization views digital security in this way, it can have the unfortunate side effect of siloing digital security resources outside of the organization's established security apparatus. This reinforces IT's continued tech/vulnerability-centric mindset, as they do not see targeted digital security incidents for what they really are – coordinated campaigns that must be dealt with at a strategic level.

“The problem with the focus on tools and tactics, and related topics of risk and ROI is that higher-level management and boards do not feel connected to the true defensive posture of their organisation. Because leaders have not been valued parts of the security program development process, they think security is mainly an issue to be solved by technical professionals. Their experience with the IT and security worlds has led them to approach security as an issue of approving budgets to purchase ever-more-costly security software…” [Emphasis mine]

Organizations stuck in this way of thinking must first change their understanding of security to be based on the recognition that all security threats are ultimately created by human threat actors, and that human threat actors will use whatever resources are available to them, whether physical or digital. This understanding will ultimately break down the silos and unify digital security resources with the rest of the organizational security apparatus. Second, as Richard states, organizational leadership must take ownership and deal with these issues at a strategic level. Only then will the organization be able to start tracking the actual campaigns targeting them, rather than hand-waving and stating that they see "millions / billions" of "computer attacks" every year.

-Josh

“…Tech is not the path to security. Security comes from the way that you live your life, not the tools. The tools are simply enablers…” @thegrugq

Jesús Linares / Wazuh have recently released OSSEC decoders for all current (v3.11) Sysmon EventIDs. Until now, I had been maintaining decoders primarily for EventID 1 (Process Creation), but we now have the added benefit of parsed logs for the following Sysmon events:

ID2: A process changed a file creation time

ID3: Network connection detected

ID4: Sysmon service state changed

ID5: Process terminated

ID6: Driver loaded

ID7: Image loaded

ID8: CreateRemoteThread detected

This is a great addition, as we can now start writing rules against thread injection events, unsigned drivers being loaded, etc.
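As a rough sketch, a Sysmon configuration that enables several of these event types might look like the following. The schema version and all filter values here are illustrative only and should be checked against the installed Sysmon version:

```xml
<Sysmon schemaversion="3.1">
  <HashAlgorithms>SHA256</HashAlgorithms>
  <EventFiltering>
    <!-- ID6: log every driver load except (for example) Microsoft-signed drivers -->
    <DriverLoad onmatch="exclude">
      <Signature condition="contains">microsoft</Signature>
    </DriverLoad>
    <!-- ID8: exclude nothing, i.e. log all CreateRemoteThread events -->
    <CreateRemoteThread onmatch="exclude" />
    <!-- ID3: log network connections made by binaries running from user profiles -->
    <NetworkConnect onmatch="include">
      <Image condition="contains">\Users\</Image>
    </NetworkConnect>
  </EventFiltering>
</Sysmon>
```

A configuration like this would be applied on each host with `Sysmon.exe -c sysmon-config.xml`.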

Two different methods could be employed to collect the Sysmon events from the local clients: one that uses only OSSEC, and a hybrid architecture in which OSSEC and Windows Event Collection are used together.

OSSEC

One way to collect the Sysmon events from all installed clients would be to use the Host Intrusion Detection System (HIDS) that Security Onion includes: OSSEC. This architecture involves installing OSSEC on all servers and workstations and configuring each agent, via the eventchannel log format, to send Sysmon logs to Security Onion. (Windows Eventchannel Example) If this were the only function OSSEC served, most organizations would be reluctant to deploy yet another client to their workstations and servers, especially when there are other, more efficient options for collecting the Sysmon data.
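On each Windows agent, the eventchannel configuration to read Sysmon's dedicated log would look something like the fragment below in ossec.conf. The channel name is the one Sysmon writes to by default:

```xml
<ossec_config>
  <localfile>
    <!-- Sysmon writes to this dedicated event channel by default -->
    <location>Microsoft-Windows-Sysmon/Operational</location>
    <log_format>eventchannel</log_format>
  </localfile>
</ossec_config>
```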

Hybrid

The architecture that the author has used and recommends is a hybrid model. This involves installing OSSEC only on servers, as there are typically other types of server logs that need collection as well. For workstations, the Windows Event Collector framework is recommended for forwarding all of the Sysmon logs to a central Windows system. (Helweg) With the logs all in one location, an OSSEC client can be installed on the collection server to process all of the logs and ship them off to the Security Onion sensor. For offsite users, events can still be collected by making the collector server publicly available. Refer to the following diagram for what this particular architecture would look like:

Diagram of hybrid collection model
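A benefit of the hybrid model is that the single OSSEC agent on the collector only needs to watch one channel: Windows Event Collector subscriptions deliver forwarded workstation events to the ForwardedEvents channel by default. A sketch of the relevant ossec.conf fragment on the collector, assuming the default subscription target:

```xml
<ossec_config>
  <localfile>
    <!-- WEC subscriptions deliver forwarded events here by default -->
    <location>ForwardedEvents</location>
    <log_format>eventchannel</log_format>
  </localfile>
</ossec_config>
```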

Now that the logs have been collected and shipped to the Security Onion sensor, they must be processed by both OSSEC and ELSA before the data can be used by either of those tools. Because Sysmon is relatively new, the author of this paper had to write custom parsers for both ELSA and OSSEC to pull out the relevant data contained in Sysmon events.
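To give a flavor of what such a parser looks like, below is a heavily simplified sketch of an OSSEC decoder for Sysmon EventID 1. The field names, parent decoder, and regex are illustrative only; the real Sysmon decoders extract many more fields:

```xml
<!-- Simplified sketch: match a Sysmon process-creation event (EventID 1)
     and extract the image and parent image fields. Illustrative only. -->
<decoder name="sysmon-event1">
  <parent>windows</parent>
  <prematch>INFORMATION\(1\): Microsoft-Windows-Sysmon</prematch>
  <regex offset="after_prematch">Image: (\S+) \.*ParentImage: (\S+)</regex>
  <order>sysmon.image, sysmon.parentImage</order>
</decoder>
```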

Security Onion is an NSM platform built on existing tools, maintained primarily by Doug Burks and Scott Runnels. It is based on Ubuntu and integrates a number of tools for both network- and host-level detection and analysis, including: (Burks, 2012)

Snort – Open source network IDS from Sourcefire.

Suricata – Open source network IDS from the Open Information Security Foundation (OISF).

OSSEC – Open source host IDS.

ELSA – Open source centralized log management application.

Sguil – Open source analyst console for NSM practitioners.

Bro – Open source network analysis framework.

Squert – Open source web application used to query and view event data in Sguil.

Snorby – Open source web application console for NSM practitioners.

Security Onion is built such that, as these tools integrate and work together, the full range of NSM data and certain types of host data can be collected, viewed, analyzed, and escalated efficiently. The host-level data is provided primarily through OSSEC and ELSA. This paper will focus on enriching this capability through the integration of Sysinternals' Sysmon, so as to augment detection and response capabilities blinded by encrypted traffic and to gain access to additional host-level indicators.

…..

The host data that Sysmon covers spans the bottom four tiers of the Pyramid of Pain – hash values (of all executables that run), IP addresses, domain names, and some network/host artifacts. Finding another free, lightweight, and feature-rich tool with the backing of a team like Sysinternals is an almost impossible task. These reasons are what make Sysmon a good choice for enriching the host-level capabilities of Security Onion.

How will Sysmon data be integrated into Security Onion? For historical queries and manual hunting, Sysmon data will be accessible in ELSA. For generating alerts based on real-time incoming Sysmon events, OSSEC will be utilized.

As NSM practitioners become blind to the majority of the traffic entering and exiting their networks and require the ability to generate quality indicators in both the network and host space, detection and response strategies need to shift to include more than just network-centric data. Hosts on the network can be an extremely rich repository of data that can be extracted and used in detection and response in conjunction with NSM data. In essence, this is applying the NSM mindset to host-level data. In fact, this concept has been coined "Enterprise Security Monitoring" by David Bianco. (Bianco, Enterprise Security Monitoring, 2013) ESM integrates intelligence-driven CND principles. As such, a notable point of ESM is the ability to locate relevant indicators pertinent to where an intrusion might be in relation to the kill chain. Because these indicators span both the network and the host, defenders need access to both categories of data.

Though many tools can generate both NSM and host data, the confounding issues typically revolve around how to efficiently collect the data and present it in a way that makes it usable for alerting, analysis and decision-making. This is where Security Onion brings it all together.

Unfortunately, it is not just encrypted traffic that harries NSM practitioners – the persistence of advanced adversaries continues unabated. This has given rise to intelligence-driven CND, which is a threat-centric risk management strategy. (Hutchins, Cloppert, & Amin) Simply put, as the defender gathers intelligence about intrusions and the adversary behind them, the defender is able to use this information in future detection cycles against the adversary. Indicators are a key part of this intelligence. From the formative paper, Intelligence-Driven Computer Network Defense Informed by Analysis of Adversary Campaigns and Intrusion Kill Chains: “By completely understanding an intrusion, and leveraging intelligence on these tools and infrastructure, defenders force an adversary to change every phase of their intrusion in order to successfully achieve their goals in subsequent intrusions. In this way, network defenders use the persistence of adversaries’ intrusions against them to achieve a level of resilience.” (Hutchins, Cloppert, & Amin)

A crucial part of this methodology is the ability to gather quality indicators. Quality indicators are extractable (“Can I find this indicator in my data?”), purposeful (“To what use will I put this indicator?”), and actionable (“If I find this indicator in my data, can I do something with that information?”). (Bianco, Enterprise Security Monitoring, 2013) Without these quality indicators, defenders will not be able to efficiently detect further intrusions by the same adversary. Various forms of indicators have differing values. Consider David Bianco’s Pyramid of Pain:

David Bianco’s Pyramid of Pain

It can be seen that hash values and IP addresses are at the bottom of the pyramid. This indicates that though these types of indicators can be useful, they are very easy for the adversary to cycle through; hence the probability of seeing the same indicator used in multiple campaigns is much lower than for the tools the adversary uses (which sit much higher on the pyramid). The key point is that as the defender builds up their detection strategy around higher-quality indicators, the adversary is forced to change their Tactics, Techniques, and Procedures (TTPs), which is very costly in terms of time and resources. This does not negate the fact that the lower indicator types are still useful.

Though there are different types of indicators (Atomic, Computed and Behavioral), it is clear that the defender must have indicators that span the gamut of both network and host-level, as an adversary carries out operations in both spaces. (Hutchins, Cloppert, & Amin)

To maintain persistence, both targeted and opportunistic threats use certain techniques to blend into the background of a busy system. One of the primary ways of doing this is by emulating and/or abusing legitimate Windows processes – for instance, malware named svhost.exe to mimic svchost.exe, a legitimate process. Another example is the Poweliks class of malware, which hollows out a legitimate process and runs its malicious threads from there. In fact, in the case of Poweliks, no binary is downloaded to the system at all, as it runs entirely in memory.

Using the host data generated by Sysmon, detection of these techniques can become commonplace. The crux of the idea is that how critical legitimate Windows processes should behave when running is well known. Let us take a closer look at this detection strategy. The current iteration of Poweliks hollows a legitimate Windows process, dllhost.exe, to perform its malicious tasks. (Harrell, 2014) When the author ran a copy of Poweliks on a system with Sysmon installed, the following pertinent data was generated:

Typically, dllhost.exe's parent process would be svchost.exe, and at runtime dllhost.exe would be passed the following parameter: /Processid:{}. As can be seen, the dllhost.exe started by Poweliks falls outside the norm and would have set off alerts.

Based on this concept, the author wrote more than ten OSSEC rules that cover normal behavior for a number of critical Windows processes. These rules can be found on GitHub. Keep in mind that the rules were written against the corresponding OSSEC decoder for Sysmon logs, so they may need to be edited if used outside of that particular context. When writing the rules, there were a number of attributes on which abnormal behavior could be flagged: image location, user context, parent process image, and, finally, how many instances should be running on the system. For simplicity, the ruleset was designed to alert on one abnormal attribute. The most immutable attribute would seem to be the parent image, which is why the ruleset only looks at the parent image for abnormalities. Within this attribute, two abnormalities are checked for. The first is whether the parent process image is known-good; for example, the parent image of svchost.exe should only ever be C:\Windows\System32\services.exe. The second is that a couple of processes – lsm.exe and lsass.exe – should never spawn a child process. Accordingly, a few rules look for these particular images as the parent process image and alert if found.
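As an illustration of the parent-image check described above, here is a sketch of a svchost.exe rule pair, using the common OSSEC pattern of a broad alerting rule silenced by a child rule for the known-good case. The rule IDs, the `sysmon_event1` group, and the `sysmon.*` field names are illustrative and depend on the decoder in use:

```xml
<group name="sysmon,">
  <!-- Alert on any svchost.exe process creation by default -->
  <rule id="100100" level="12">
    <if_group>sysmon_event1</if_group>
    <field name="sysmon.image">svchost.exe</field>
    <description>Sysmon - svchost.exe started by abnormal parent process</description>
  </rule>

  <!-- Known-good case: parent is services.exe, so silence the alert -->
  <rule id="100101" level="0">
    <if_sid>100100</if_sid>
    <field name="sysmon.parentImage">services.exe</field>
    <description>Sysmon - svchost.exe started by its expected parent</description>
  </rule>
</group>
```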

In the eleven years since Richard Bejtlich wrote his seminal book on Network Security Monitoring, practitioners have seen a number of issues that expose some of the limitations of network-centric monitoring. The rise of encrypted-by-default web traffic, which blinds defenders to most NSM data types, is one of those issues.

The collection of NSM data typically happens through a TAP or SPAN at a strategic chokepoint in the network. If the network data between client and server is encrypted, a number of types of NSM data will be useless to the analyst – full content, extracted content, and certain types of alerts. With the revelations of the past few years that a number of governments around the world have been intercepting their citizens' unencrypted communications, there has been significant interest in encrypting most, if not all, of the web traffic around the world. In 2014, CloudFlare, which hosts a content delivery network (CDN) and security services for two million websites, enabled free SSL for all of their customers. They stated, "Having cutting-edge encryption may not seem important to a small blog, but it is critical to advancing the encrypted-by-default future of the Internet. Every byte, however seemingly mundane, that flows encrypted across the Internet makes it more difficult for those who wish to intercept, throttle, or censor the web." (Prince, 2014)

A recent study, The Cost of the "S" in HTTPS, observed twenty-five thousand residential ADSL customers and found HTTPS accounting for 80% of upload traffic, compared to 45.7% in 2012. (Naylor, et al.) This trend is expected to continue for the foreseeable future.

This increase in encryption will typically be seen in north-south traffic rather than east-west traffic, which means NSM sensors deployed to monitor internal traffic may not be so readily affected. However, sensors deployed at network egress points will certainly be affected unless some type of mitigation is put into place. Such mitigations would include proxying the SSL traffic so that the network data could be read, though this solution is limited in practice due to performance, privacy, and liability concerns.