Managing an effective, sustainable security operations program today is challenging. Threats continue to evolve, resources are constrained across the board, and processes need to be built, tested, and reported on to a wide variety of audiences ranging from management to compliance auditors to security-aware customers.

That’s why at Rapid7, our mission is to help you break down legacy silos, whether between IT and development teams or with external audiences, with sustainable processes and connected tools. From a technology perspective, we’ve focused on three major areas:

Modern threat detection and response

Between today’s diverse range of threats and the spectrum of data that comprises your modern corporate network, the base requirements for effective visibility, analytics, and automation have shifted:

1. Visibility: Data collection, correlation, and analysis

The first challenge companies need to address is collecting the right data from the right sources, from endpoints, to cloud services, to infrastructure-as-a-service (IaaS). Then, it’s a matter of making that data easily accessible, searchable, and digestible across infosec, IT, and development teams.

2. Analytics: Meaningful alerts and threat intelligence

Once you have the data, you need to know whether it can help you find anomalies and active threats across the users and assets in your ecosystem. Nearly all threat detection solutions today spit out a high volume of false-positive alerts lacking important context, which makes it both tedious and frustrating to understand what really happened. Security teams need to demand a better approach that helps them find and respond to the events that deserve focused investigation.

3. Automation: Investigations and incident response

Once risky or malicious activity is detected, you need to be confident that the detection is legitimate and be able to trace everything the adversary did, from initial access to your internal network to lateral movement and persistence across other company assets. Once you contain and remediate a threat, your risk of re-breach is minimized, and that new threat intelligence is added to your prevention defenses.

How InsightIDR can help

InsightIDR works by first unifying security data, collecting it from endpoints, networks, and your cloud services, then automatically tying those disparate events to their respective users and assets. From a single real-time map, you can see all ingress to your internal network across VPN and cloud services. Along with the wide range of stolen-credential alerts generated by our user behavior analytics, this makes it easy to spot, for instance, Office 365 accounts authenticating from Nigeria.

InsightIDR is delivered exclusively as SaaS-SIEM, which means you don’t need to worry about data storage, scaling, or infrastructure. You get full access to your data with a powerful log search and pre-built compliance dashboards, with a configurable retention length to align with your business needs.

Because InsightIDR is focused on holistic detection, unlike other SIEMs, endpoint detection and response is included with our Insight agent. We maintain one agent across the Rapid7 portfolio, so you can also gather asset vulnerability data if you have Rapid7 InsightVM.

If you’re familiar with the MITRE ATT&CK framework, InsightIDR has hundreds of pre-built detections to find dangerous behavior at each stage of the attack chain. This ranges from email accounts exposed in public data breaches to suspicious persistence techniques (e.g., PowerShell spawned from Word), to east-west lateral movement, to privilege escalation and event log deletion.

Whenever InsightIDR surfaces a threat, relevant events are presented in a visual timeline. From there, it can automatically sync with ticketing systems such as Jira or ServiceNow to ensure the right stakeholders are in the loop. From within the investigation interface, you can also take direct action to contain a threat, including disabling an account in Active Directory or Okta, killing a process, or quarantining an asset with the Insight agent.

You can see Hannah walk through these powerful capabilities in our on-demand webcast. The InsightIDR demo portion kicks off at 10:19.

A two-pronged, layered approach to behavior analytics—user behavior analytics and attacker behavior analytics—is what sets InsightIDR apart. It’s already a crucial element of modern detection and response programs for hundreds of companies, spanning the S&P 500 to midmarket companies with understaffed security teams. To see how you can leverage automation using InsightIDR, watch the entire webcast on demand here.

See the power of UBA in action with a free trial of InsightIDR.

Q: Sam, what’s the difference between user behavior analytics and a rule-based approach?

A: Let’s start with an analogy: You’re a fisherman out on a mission to specifically catch tuna. You throw out a net, and when you bring it in, the net has scooped up a bunch of other fish, too. Either you have to sort through them, or a whole bunch of fish will be harmed.

This can be what it’s like to sort through alerts from security tools with static rule sets. Because they’re designed to look for specific events, they also catch users and assets that are actually innocent, leaving you to sort through the results and sift out the noise.

While rules are easy to write, they’re not the most accurate way to detect real threats today—or an efficient use of your team’s time. This is where user behavior analytics comes in.

Q: How does user behavior analytics work?

A: As soon as UBA is integrated within your environment, it begins a short learning period and quickly starts to understand what is normal versus abnormal in your environment. This learning period helps fine-tune detections and block out the noise. UBA gathers all the users and assets and monitors them to understand what they do, what they talk to, and so on, to better understand if an event is actually an issue or just a normal occurrence.

Then, any good UBA solution will allow you to manually add things to the baseline that weren’t discovered during the training period. For example, you can indicate that while a certain event may seem abnormal, it shouldn’t be flagged because it is actually typical and just happens infrequently. By adding exceptions like these to the behavior engine, you’ll be able to tune out what little noise remains so that whenever UBA does fire off an alert, you know with high confidence that it’s something worth investigating.

When you start looking at behaviors, you’re not just looking to detect users and machines in the environment, but also how each of them normally behaves so that you can more accurately detect when anomalies occur. Rules, on the other hand, don’t take into account who the actors are; they can only understand that certain events equal certain alerts. There is no consideration for how many users or machines there are in the environment and whether this particular time that a rule fires is the thousandth time or the first.

UBA allows you to get much more sophisticated with detecting threats and reduces—or altogether eliminates—false positives.

Q: How has user behavior analytics evolved over time?

A: At its inception, UBA was math applied to data, which simply produces a different kind of noise. Today, it’s about understanding what combination of behaviors is likely to represent an attack, and then creating mechanisms for the UBA engine to adapt to each customer’s unique environment.

It’s important to note, though, that a lot of UBA vendors today still simply run analytics on data. Rapid7, on the other hand, truly applies an attacker methodology to our behavior models based on the deep threat intel our teams are able to gather. Our recently published Q3 Threat Report is a great example of what we’re collecting on attacker tactics, techniques, and procedures.

Q: What should a good user behavior analytics solution offer?

A: First, a good UBA solution should be able to build an accurate model of all the entities in your environment—both users and machines to which it can attribute behaviors. Real-time attribution of events to actors is a really hard problem to solve, requiring the collection of a very diverse set of data, which is why many vendors have yet to crack the code. Many traditional SIEM solutions claim they have UBA, but SIEM engines aren’t built to do real-time attribution. This is because users and assets constantly move around in a modern network architecture, leading to an engine that cannot accurately map events to entities.

Rapid7’s InsightIDR solution utilizes a proprietary attribution engine with models that are purpose-built to detect behaviors indicative of true threats, while sorting out users who may be doing unusual tasks but are not actually compromised or performing malicious actions.

Next, the UBA engine should have a group of models that are created to detect attacker behaviors, not just anomalous behaviors. Because your users will do strange things all the time, detections need to be more sophisticated. Attacker behaviors are a specific set of events that attackers trigger as they move across your environment.

Last, UBA needs the ability to tune the behavior analytics engine. Ideally, tuning should happen automatically, but with the option to tweak it manually once it’s fully operational. This allows you to sort out unique events to your organization to make the engine even more accurate.

Q: How has Rapid7 adjusted its approach to maintain industry-leading standards?

A: Rapid7 UBA is one of the few UBA engines that truly applies an attacker methodology to behavior models. Combining our unique expertise in threat detection with behavior analytics, we’ve developed highly accurate UBA models that can catch real attacks in your environment.

Since InsightIDR is a SaaS service, we can actually see how our UBA models and alerts are firing across a diverse set of customers. That continuous feedback loop makes our behavior engine highly tunable and of a higher fidelity.

Q: Are there any widely held beliefs about user behavior analytics that you’d like to debunk?

A: The big one is believing that just math alone can solve the problem. Many vendors think that if they have a multitude of mathematical and machine learning techniques, that alone is enough to detect attackers. While you may occasionally catch one, it can easily lead to drowning in false positives, which makes it hard for teams to know which alerts are real and which are not.

The second is that you can build UBA without a true attribution engine and the ability to track users and assets across a modern network. This is why I am skeptical when a traditional SIEM claims to have UBA; they just don’t have the foundational elements needed to build a functional UBA engine, making the result often simply more noise.

Q: How should organizations see user behavior analytics as part of their broader threat detection strategy?

A: The ability of UBA engines to track unusual behavior across your entire environment can be a principal tool for detecting attacks and a valuable source of information during impact assessments and investigations.

Whether used as a core detection engine or as a way of augmenting other alerts, UBA’s ability to quickly adapt to new attacker techniques in a way that isn’t high-noise is incredibly valuable to security teams both large and small.

Start a free trial of InsightIDR and experience Rapid7’s approach to UBA today.

Hello, InsightIDR blue teamers! Last week, we released Universal Event Formats (UEF), which greatly expands our user behavior analytics (UBA) support for DHCP, antivirus, ingress authentications, and VPN. For UEF to work, all you need to do is transform your previously unsupported log into a UEF and send it for automatic user attribution in InsightIDR.

The basic steps are to download and install NXLog, modify the default nxlog.conf file to ingest the raw logs and output Rapid7 UEF, then send the logs and verify successful ingestion into InsightIDR.

I recorded the following video walkthrough that you can watch paired with this blog guide.

1. Collect the logs

The first step is deciding where to install NXLog CE. Two recommended options are:

Install it on the system where the logs are generated (verify OS compatibility).

Install it on the InsightIDR collector and forward the logs to it (for example, via syslog).

In the video, my source system is running Windows and the log is in a text format, so it makes sense to install NXLog CE on it. If I were instead using syslog to forward the logs from a device to NXLog CE, it would make more sense to install NXLog CE on the collector.

At this point, you should keep the “NXLog User Guide” and “NXLog Community Edition Reference Manual” handy, which are both available from the NXLog documentation site.

First, we should prep the system that has NXLog CE so that we can change configurations, test, and repeat until everything is working as needed.

I installed NXLog CE to the default location on the same system that has the logs I need to collect, C:\Program Files (x86)\nxlog. This folder contains a subfolder called conf that has the NXLog configuration file, nxlog.conf. Although the NXLog install created a default nxlog.conf, I now need to edit it to manipulate my logs. I start by making a copy of the nxlog.conf as a backup.

Next, I open services.msc so that I can stop and start the NXLog service as needed for my testing. You’ll need to restart the service for each new test, so it’s best to have it handy and open. First, stop the NXLog service—we don’t want it to run just yet.

When NXLog starts, it will create a diagnostic log for itself called nxlog.log. I chose to write the nxlog.log file to C:\Program Files (x86)\nxlog\data, its typical location (this is a setting in nxlog.conf).

We want to tail this file so that it tracks any new entries. This is where NXLog will post any error messages, parsing issues, etc., so keep nxlog.log open. Windows doesn’t have a native tool to tail files, so you can use the free BareTail tool to do this.

nxlog.conf can be edited with any text editor, so use your favorite—Notepad will suffice, for example. Onto the next step!

1b. Edit nxlog.conf

NXLog uses nxlog.conf to understand how to read in logs, manipulate them, and forward them to a receiving device. We need to edit nxlog.conf to input and convert logs into UEF, and forward them to the InsightIDR collector.

nxlog.conf has three main sections: global directives, input and output blocks, and a route block. Global directives define what NXLog can do. The input block reads the source logs, while the output block defines how the logs are forwarded and the route block tells NXLog what order to process the input and output blocks in. We’ll need to modify each of these sections to successfully send our logs to InsightIDR.

Based on the information above, a shell for the nxlog.conf file looks like this:

The first bit details where NXLog is installed, cache folder, etc. Next are <Extension> blocks, which define global directives. You’ll need a directive for each type of module, but we don’t need to worry about them yet. Finally, you’ll see the <Input>, <Output>, and <Route> blocks.

We’ll start with this shell and add to each section.
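As a sketch only (paths assume the default install location, and the collector address is a placeholder you would replace with your own), such a shell might look like:

```
## Global directives: install location, cache folder, and NXLog's own diagnostic log
define ROOT C:\Program Files (x86)\nxlog
Moduledir %ROOT%\modules
CacheDir  %ROOT%\data
Pidfile   %ROOT%\data\nxlog.pid
SpoolDir  %ROOT%\data
LogFile   %ROOT%\data\nxlog.log

## Extension blocks define the modules we will use
<Extension syslog>
    Module  xm_syslog
</Extension>

## Read the source logs
<Input in>
    Module  im_file
    File    'C:\SomeSoftware\logs\LogFile*.txt'
</Input>

## Forward to the InsightIDR collector (hypothetical address; UDP port 10010 per this guide)
<Output out>
    Module  om_udp
    Host    192.0.2.10
    Port    10010
    Exec    to_syslog_bsd();
</Output>

## Also write results to a local file for testing (hypothetical path)
<Output resultfile_out>
    Module  om_file
    File    'C:\Program Files (x86)\nxlog\data\results.csv'
</Output>

## Process the input, then both outputs
<Route 1>
    Path    in => out, resultfile_out
</Route>
```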

1c. Read in the logs

Let’s start by filling in the <Input> section in nxlog.conf. Here, our logs are in a folder on the same system as the NXLog CE installation. The application is writing log files to C:\SomeSoftware\logs, and output as LogFile1.txt, LogFile2.txt, etc. Each log entry is written into the file with one log entry per line.
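Given that layout, the <Input> block can use NXLog’s im_file module with a wildcard (a sketch, assuming those paths):

```
<Input in>
    Module  im_file
    ## Matches LogFile1.txt, LogFile2.txt, etc.; each line becomes one event
    File    'C:\SomeSoftware\logs\LogFile*.txt'
</Input>
```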

1e. Routes and other NXLog configurations

We’re at the final section, the Route block. Our flow for the logs is simple: We want them to be read in using <Input>, and then go to both of the <Output> blocks. Here is the Route block:

<Route 1>
Path in => out, resultfile_out
</Route>

Before we test, we need to add directives by defining all the NXLog extensions being used in nxlog.conf. So far, our nxlog.conf defines one extension for syslog, which allows us to output logs as syslog. We will need two extensions in nxlog.conf: one for syslog and one for JSON. As our final logs should be in JSON, we should add an <Extension> for outputting JSON, xm_json. I am not yet using the xm_json extension, but I know I must, as my final logs are required to be in JSON format.
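Those two <Extension> blocks might look like this:

```
## Syslog output support (to_syslog_bsd and friends)
<Extension syslog>
    Module  xm_syslog
</Extension>

## JSON conversion support (to_json), used in a later step
<Extension json>
    Module  xm_json
</Extension>
```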

Before we continue, it makes sense to test the configuration. Let’s now start the NXLog service. Watch for errors in C:\Program Files (x86)\nxlog\data\nxlog.log by opening the file with baretail.exe installed earlier.

As new log entries are written into the log files in the C:\SomeSoftware\logs folder, I expect NXLog to collect and write them into a results.csv file. We haven’t done any log conversion yet, so they should appear in their original form. If you encounter errors in nxlog.log, or don’t get results, fix these issues before continuing.

2. Convert the logs to UEF

Going back to our objective, converting the Raw Log into UEF, there’s a bit of work ahead of us!

First, we should consider how to read in and parse the logs. In our example, the log is in Key Value Pair (KVP) format. Each field of the log entry contains <key1>=<value1>. After consulting the “NXLog CE Reference Manual,” we choose the xm_kvp module to read in the fields. This module requires some specifications for the data format to be defined in the kvp extension, such as what separates the key value pairs, so I take note of this information from my actual logs.

We need to add in an extension for kvp and modify our <Input> section to run the kvp module. After some testing to determine which options work with my input, I discover that my logs have tabs separating the key value pairs. Now, nxlog.conf looks like this:
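A sketch of those changes, with a tab as the key value pair delimiter (yours may differ):

```
## Parse tab-separated key=value fields
<Extension kvp>
    Module        xm_kvp
    KVPDelimiter  \t
    KVDelimiter   =
</Extension>

<Input in>
    Module  im_file
    File    'C:\SomeSoftware\logs\LogFile*.txt'
    ## Split each line into its individual $fields
    Exec    kvp->parse_kvp();
</Input>
```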

2a. Remove the extra fields

First, let’s remove all the extra fields in our logs that UEF does not expect. There are a few different ways to do this, but following the path of least resistance, I am going to use the simple delete procedure to remove all the fields from the log entries that I do not want.

I edit <Input> to include the delete, only to realize that I not only need to parse my logs as key value pairs, but I also need to write them back as key value pairs. Oops! A log format must be parsed before the individual parts can be used for processing. My existing <Input> reads them in but does not output them back out, so I need to add in some lines to also output the parsed logs. I currently have an <Extension> for the initial read of the logs called “kvp”. While this might work for my output as well, I create a new <Extension> for the output called “kvp2”. This allows me to use different parameters for KVPDelimiter, KVDelimiter, etc., if I need to.

After some testing, we add <Extension> with the following configuration:
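A sketch of the kvp2 extension, reusing the same delimiters for the output:

```
## Second kvp extension, used to write parsed logs back out as key value pairs;
## KVPDelimiter/KVDelimiter could be changed here independently of the input
<Extension kvp2>
    Module        xm_kvp
    KVPDelimiter  \t
    KVDelimiter   =
</Extension>
```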

Do you see the kvp2 module being used in the else statement? I test this and verify that indeed, I have the input being written out in kvp format. Now I can start deleting fields for real! I add in the delete ($field) procedure and test it. Here is nxlog.conf now:
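A sketch of the <Input> block at this point. The if/else condition and the $junk1/$junk2 field names are hypothetical stand-ins for your own logs: drop entries that didn’t parse into the fields we expect, otherwise delete the unwanted fields and write the entry back out as key value pairs:

```
<Input in>
    Module  im_file
    File    'C:\SomeSoftware\logs\LogFile*.txt'
    Exec    kvp->parse_kvp();                \
            if not defined($usrName) drop(); \
            else                             \
            {                                \
                delete($junk1);              \
                delete($junk2);              \
                kvp2->to_kvp();              \
            }
</Input>
```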

2b. Rename fields

Now, we need to alter some of the key names. UEF expects “time”, not “timestamp”, and wants “source_ip” instead of “src”. After reviewing the original log, we also need to alter “realm” to “authentication_target”, and “usrName” to “account”. After reviewing the NXLog documentation, I decide to use the rename_field procedure to rename the fields as desired.

We can add these lines between the kvp->parse_kvp() and kvp2->to_kvp() lines:
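Those rename lines, using NXLog’s rename_field() procedure with the field names reviewed above:

```
rename_field("timestamp", "time");
rename_field("src", "source_ip");
rename_field("realm", "authentication_target");
rename_field("usrName", "account");
```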

2c. Add additional fields

We’ll now take care of the fields required by UEF that are not currently in our logs: version, event_type, and authentication_result. Version and event_type are literal strings, so I will define them in my nxlog.conf file using simple definitions. Authentication_result is a bit more complicated. Your logs will be different from mine, but my device has a “policy” field that we’ll use for authentication_result. If policy=allowed, then authentication_result=SUCCESS. If policy=blocked, then authentication_result=FAILURE. Otherwise, I want the whole log dropped.

I start by defining $version and $event_type and adding those fields to the logs. This section gets added to the bottom of my else statement:
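A sketch of that section. The literal version and event_type values here are hypothetical placeholders; use whatever the Rapid7 universal event sources documentation specifies for ingress authentication:

```
## Fields UEF requires that our source logs lack
$version = "v1";
$event_type = "INGRESS_AUTHENTICATION";

## Derive authentication_result from the device's "policy" field
if ($policy == "Allowed")
    $authentication_result = "SUCCESS";
else if ($policy == "Blocked")
    $authentication_result = "FAILURE";
else
    drop();
```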

I also verify that any logs where policy is not Blocked or Allowed have been dropped.

2d. Convert the “TIME” field to ISO 8601 Extended format

In my logs, Timestamp is already in ISO 8601 Extended format, so no further work is required. If your log is not, be sure to format the “time” field, as it’s required for UEF. Details on how to do this can be found at the bottom of this post.

2e. Convert to JSON

Now that we’ve carefully configured our nxlog.conf file, we need our results converted into JSON format. We suggest the to_json procedure for this—add this to the bottom of your <Input> block.
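As the last statement of the Exec chain, replacing the kvp2->to_kvp() call we used while testing:

```
to_json();
```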

The final nxlog.conf file used here is at the bottom of this post.

I run a test and verify that my results file shows the logs in JSON format:

Yeah! My logs are now transformed, and I can move on to the final section: sending these logs to InsightIDR.

3. Send the logs to InsightIDR

Now that the logs appear to be in the proper UEF format, I will add them into InsightIDR. Earlier, we chose to forward the logs from NXLog via syslog over UDP port 10010, and that is defined in nxlog.conf. Now I just need to log in to InsightIDR and add this event source.

Let’s ensure that our transformed logs are arriving at the collector. Verify that the NXLog service is running and forwarding logs, then click on “View Raw Log” on the event source to verify that the logs you are sending are there. Once they’re there, the last thing you need to do is verify parsing.

3a. Verify parsing

An easy way to check is to head over to Log Search and check for your new event source. If it’s not there as expected, this typically means that the logs aren’t matching our log parsing rules. In our example, if I don’t see the event source appear under Ingress Authentication, I would go back and review the format of the transformed logs, comparing them against UEF. However, after waiting for the slight delay for the logs to be processed and ordered, I can see them in Log Search.

I also see these events on the InsightIDR Ingress Map as expected.

That concludes our example transformation guide of converting a raw log into a Universal Event Format readable by InsightIDR for log search, user attribution, and detection of anomalous behavior. If you’re going through this process and would like assistance, reach out to your Customer Success Manager or Quick Start contact—we want InsightIDR to provide as much visibility into your environment as possible.

Below are a few supplemental steps in case you need to format user accounts or convert time to ISO 8601.

Supplemental: Format user accounts

In my sample logs, my user accounts were all simple account names, such as jsmith. What do you do if your user accounts are specified in a more complex format?

I tested some additional samples and had no parsing issues with some common user account formats. Unless your user account field contains some crazy formats, you probably do not need to do anything special to parse this field.

If the user account is not an account, but a full username, such as John Smith, the logs were parsed properly:

If the user account is in the format of domain\username, the logs were collected and the authentication properly attributed. However, the account shows an extra “\” in the account field. To fix this, a regular expression in nxlog.conf could strip it out. However, since the attribution is correct and the username field is properly displayed, I have left the log as is.

If the user account is specified in UPN format such as user@domain.com, this is also properly parsed and attributed.

If the user is not a valid user in your organization, the logs are still parsed and placed in Log Search.

Supplemental: Convert Time to ISO 8601 Extended

My log sample happily contained logs that were already in ISO 8601 Extended format. If you are trying to get your head around this time format, check out this documentation.

However, what should you do if you are not so fortunate and your time stamp field is in a different format, or, as sometimes is the case, just missing altogether?

The answers to these questions can be complex and varied. In this introductory guide, I propose the simplest solution to this problem. (Be sure to also reference “Adjusting Timestamps” in the “NXLog User Guide” for more.)

The easiest method to deal with messy date/time formats in the source logs is to use the $EventReceivedTime field instead of the original log timestamp. This clearly has the advantage of a single source for “time”, which will then also be in a standard format that you can easily deal with in NXLog. Most of the time, $EventReceivedTime will also be the log generation time, give or take a few seconds, so this method sacrifices little or nothing in the way of accuracy. However, if the source time in your logs is quite different than $EventReceivedTime, consider a different method for parsing time. In other words, if your sending device is not sending the logs to NXLog as soon as they are generated, this method may not work for you.

Before deleting $EventReceivedTime from the parsed logs, you should set $EventTime equal to it:

$EventTime = $EventReceivedTime;

In my original log sample, where $Timestamp is already in ISO 8601 Extended format, I use $Timestamp as the date/time and rename “Timestamp” to “time”, because “time” is the field that UEF expects to see. If you are using $EventReceivedTime instead, you do not need $Timestamp, so add a line to delete it.
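That deletion is a single statement:

```
delete($Timestamp);
```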

We need to format $EventTime so that it is in the proper ISO 8601 Extended format. If you’re familiar with the function strftime, you might wonder why I don’t use it for a simpler ISO 8601 Extended conversion. Unfortunately, if you run strftime on a Windows system, the %z option does not work. If you have no idea what I am referring to, count your blessings and keep reading. Basically, if you attempt to use the strftime function on a Windows host, you may have trouble getting the time converted from local time to UTC time.

I am going to use this line to convert $EventTime into the correct UEF format for time:

$EventTime = strftime($EventTime, '%Y-%m-%dT%H:%M:%SZ');

There is one problem with this: This method will not convert local time to UTC time. ISO 8601 Extended format is UTC time—that is, UEF expects the logs to arrive having been converted to UTC time. The best way to deal with this is to set the time zone of the host running NXLog to UTC time. That way, $EventReceivedTime is already UTC time, and no further conversion is necessary.

If this is not possible, you can do a manual conversion in NXLog by adding time to $EventTime. For example, my server running NXLog CE is in Mountain Standard Time. UTC time is seven hours ahead of MST (MST is UTC-07:00). We can convert MST time to UTC time with this line:

$EventTime = $EventTime + (7 * 3600);

Unfortunately, this does not take Daylight Saving Time into consideration. Programmatically resolving this issue is beyond this introductory document. Therefore, I can either live with this or manually adjust nxlog.conf twice per year.

One of the goals of InsightIDR is to be that single pane of glass across your detection and response lifecycle. The core that powers this is user behavior analytics (UBA), which detects anomalous authentications and behavior and enriches your log data with user context. We’d like to announce a new way to collect log data: Universal Event Formats.

Q: What kinds of activity can be represented with the Ingress Authentication Universal Event Format?

A: Any authentication activity, whether it is to access an internal service (such as LogMeIn Pro) or a third-party service (such as Oracle Identity Cloud) that stores internal proprietary data, may be represented with the Ingress Authentication Universal Event Format. InsightIDR will correlate and analyze these authentications to identify anomalous behavior such as Multiple Country Authentications and Ingress from Disabled Accounts, and to power your Ingress Locations map and investigations in Log Search.

Q: What if I have special VPN software that is older or obscure?

A: No problem! You can use the VPN Universal Event Format to send that data to InsightIDR. We will apply UBA just like we provide on natively supported event sources to give you the attribution and the details you love.

Q: What’s the difference between UEF and the Raw Data event source in InsightIDR?

A: Currently, InsightIDR supports several types of “Raw Data,” including Generic Syslog, Generic Windows Event Log, Custom Logs, and Database Audit Logs. This allows you to collect almost any type of log from any device with InsightIDR. With Raw Data, you get log search and dashboards, but they aren’t analyzed for user behavior and applied against your user baselines. When you convert the raw data to a supported UEF, you then get the detection, user attribution, and investigative context.

Q: What’s your suggested log manipulation tool?

A: Any tool that allows you to accept incoming log data, manipulate it into UEF, and forward it to InsightIDR may be used. We recommend NXLog or Logstash. NXLog is available in both a free community edition and a commercial enterprise edition. The community edition will typically work fine for our purposes. The commercial version is more feature-rich and flexible, which may make it easier for you to manipulate your logs. Logstash is an open source tool for collecting, transforming, and sending logs into a “stash,” which in this case will be your InsightIDR collector.

Q: What’s all of this going to cost?

A: These UEFs are available at no additional cost, so if you have a previously unsupported DHCP, VPN, antivirus, or ingress authentication data source, check out our universal event sources documentation pages and give it a shot!

We will be releasing support for more sources in the future, as well as a follow-up blog next week that will map out a step-by-step plan for converting your raw logs into UEF. If you have any questions, let us know—we want to hear about your interesting use cases and ensure you are set up for success.

Start leveraging the power of user behavior analytics with a free trial of InsightIDR today.

A new integration with Azure Security Center makes it easy to deploy the Rapid7 unified Insight Agent across new and existing Azure Virtual Machines. This automated deployment enables InsightVM customers to maintain constant visibility into the assets, vulnerabilities, and risks in their Azure environments.

Additionally, our InsightIDR user behavior analytics (UBA) now support Azure Active Directory, making it possible to identify compromised users and risky behaviors across both on-premises Active Directory and your Identity and Access Management (IAM) services hosted in the Azure cloud.

On a higher level, these two new integrations help Rapid7 customers break down the silos between IT and Security teams in an effort to power SecOps at their organizations. Simplifying the deployment of important security tools while providing visibility into the modern environment is critical for collaboration across teams towards shared goals.

Let’s dig into the day-to-day value-adds of these two new integrations:

Azure Security Center Integration with InsightVM

The small footprint and versatility of the unified Insight Agent makes it the ideal solution to monitor today’s modern environment. Azure Security Center makes it simple to automatically deploy the Insight Agent to Azure Virtual Machines as they are spun up.

Traditional vulnerability assessment solutions can’t keep up with the highly dynamic nature of cloud environments. Vulnerable assets can come online and operate for extended periods of time before traditional solutions identify their risk (if they do so before the asset spins down, that is). Rapid7’s Insight Agent and InsightVM ensure assets are continually assessed, without requiring scan engines or waiting for scan windows. As a result, security professionals know before attackers do when vulnerable assets have been introduced to their environments.

In addition to configuring Azure Security Center to auto-deploy the agent onto each new Virtual Machine, the agent can also be installed on all of your existing Virtual Machines with one click:

With the agent deployed to your existing assets (and automatically deployed on new assets), you’ll then be able to see all of your assets—from Azure, AWS, on-premises, VMware, and more—in a unified view in InsightVM.

InsightIDR is able to consistently identify compromised users by applying user behavior analytics to the data already generated by your network and security stack. For example, once InsightIDR has access to logs generated by your directory services, activity on your network will be correlated to the users and assets behind them. Combined with our included, cross-product Insight Agent, you have visibility into user behavior across endpoint, network, and cloud.

With this new integration, you can have full visibility across your environment whether you are using Active Directory on-premises or Azure Active Directory in the cloud.
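A minimal sketch of the attribution idea described above: join DHCP leases (which tie an IP to an asset) with directory-service authentications (which tie a user to an IP), so raw network activity can be reported per user and asset. The data shapes here are invented for illustration; InsightIDR performs this correlation internally.

```python
# Fabricated example data: DHCP leases map IPs to assets, and
# authentication events map users to source IPs.
leases = [
    {"ip": "10.0.4.77", "host": "LAPTOP-JDOE"},
    {"ip": "10.0.4.90", "host": "LAPTOP-ASMITH"},
]
auth_events = [
    {"user": "jdoe", "ip": "10.0.4.77", "result": "success"},
    {"user": "asmith", "ip": "10.0.4.90", "result": "failure"},
]

def attribute(auth_events, leases):
    """Return auth events enriched with the asset behind each source IP."""
    ip_to_host = {l["ip"]: l["host"] for l in leases}
    return [
        {**e, "asset": ip_to_host.get(e["ip"], "unknown")}
        for e in auth_events
    ]

for event in attribute(auth_events, leases):
    print(event["user"], "->", event["asset"], event["result"])
```

In practice leases expire and IPs get reassigned, so the join is time-bounded; the static lookup above just shows the shape of the correlation.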

For the full details for connecting Microsoft Azure Active Directory, LDAP, and relevant DHCP data into InsightIDR, please see our help documentation here.

Having the ability to detect and respond to user authentication attempts is a key feature of InsightIDR, Rapid7’s threat detection and incident response solution. Users can take this ability one step further by deploying deception technology, like honey users, which come built into the product. A honey user is a dummy user not associated with a real person within your organization. Because the honey user is not a real user, it should never be used by anyone for any valid authentication. Attackers frequently attempt to authenticate to as many user accounts as possible during the reconnaissance phase of an attack. Therefore, the idea behind the honey user is that if you see any activity on the honey account, it is an indicator of potential attacker activity. In InsightIDR, such activity generates a Honey User Authentication incident.

How to Use Honey Users in InsightIDR

1. Set Up Your Honey User Account

First, create a new user in Active Directory with a believable name and every appearance of being a normal employee in your organization. To widen the trap, you may wish to create several such accounts. Note each honey user’s name so that it can be entered into InsightIDR’s settings. There are various strategies you can take when selecting a name. A few of note:

Company owners, board members, and other VIPs

Default credentials for various technologies

Common pentesting account names

Next, in InsightIDR, navigate to Settings --> Honey Users and enter all of the honey users using the search bar. Most of the time, [First & Last Name] will be the appropriate search query rather than the Active Directory username. Selecting the desired name(s) will mark the user as a honey user.

NOTE: The LDAP event source is how honey users are made available in the InsightIDR configuration. In order for the honey users to appear when you search for them, the collector must have made an LDAP pull since you created the honey users. In other words, you may need to wait a day or so after creating the honey users in Active Directory before you can configure them in InsightIDR.
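The detection logic behind a honey user is deliberately simple, and a rough sketch of it outside the product looks like this. By construction, any authentication attempt naming a honey account is suspicious. The account names and event shape below are illustrative assumptions.

```python
# Honey accounts exist only as bait; no legitimate login should ever
# reference them. These names are made up for illustration.
HONEY_USERS = {"svc_backup_admin", "board.chair"}

def check_auth_event(event: dict):
    """Return an alert dict if the event touches a honey user, else None."""
    if event.get("user", "").lower() in HONEY_USERS:
        return {
            "alert": "honey_user_authentication",
            "user": event["user"],
            "source_ip": event.get("source_ip", "unknown"),
        }
    return None

# A normal user never triggers; any touch on a honey account does.
assert check_auth_event({"user": "jdoe", "source_ip": "10.0.4.77"}) is None
alert = check_auth_event({"user": "svc_backup_admin", "source_ip": "10.0.9.9"})
print(alert["alert"])
```

Because the rule has no baseline to learn and no threshold to tune, honey user alerts are among the highest-fidelity signals a detection program can have.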

2. Test Your Honey User Account

Once configured, any attempt to authenticate with a honey user account will generate an alert. Ensure that InsightIDR is collecting the audit trail from the test system so it can see the logs containing the honey user authentication attempts. For example, if you’re attempting to log onto your domain with the honey user account(s), make sure that InsightIDR is already ingesting your domain controller security logs.

InsightIDR Honey Users: Deception at Its Best

If you’ve been interested in deploying deception traps such as honey users throughout your organization, InsightIDR can help. The included deception tech is easy to set up and manage, and event alerts come back instantaneously. If you’re a current customer, log in to your InsightIDR account and give honey users and other deception traps a try. Then, comment below to tell us the most interesting alert you’ve found.

Not yet an InsightIDR user?

If you’re still reeling from a poor experience with SIEM, InsightIDR has abstracted away the biggest, most common pain points we hear from strained security teams: no more buying and managing hardware, writing and tuning detection rules, or navigating a UX that brings more nostalgia than answers. Best of all, InsightIDR is a cinch to deploy: all that’s required from you is a few clicks to get up and running.

Start a free trial of InsightIDR today

You’ve hired the best of the best and put up the right defenses, but one thing keeps slipping in the door: phishing emails. Part of doing business today, unfortunately, is dealing with phishing attacks. Few organizations are immune to phishing anymore; it’s on every security team’s mind and has become the number one threat to organizations. 80 percent of confirmed breaches to date have been attributed to weak, stolen, or default credentials.

Phishing emails attack your users in different ways, with social engineering, malware, and drive-by downloads being the most common methods. Your defense ultimately relies on how effective your users are at detecting phishing emails, links, and attachments. Attackers know this, which is why they’ve become quite savvy at duping users with believable pretext (e.g. sending an email from a fake CEO’s account) or a false sense of urgency (e.g. requesting a “mission-critical” expense be paid). If even one user’s account is compromised, an attacker can then send a legitimate corporate email to anyone in the company, which can magnify clicks and the chance of a real breach.

But phishing is an issue you can combat. Since your employees are your first line of defense against phishing, an effective strategy needs to both arm users and have countermeasures in place should that fail. In this post, we’ll explain the components of an effective phishing program:

Looking for the right tools and tactics to combat phishing attacks?

Three components of an effective phishing program

Phishing attacks can spread quickly, so to protect against them, you need more than just security training or two-factor authentication. You should have the ability to:

1. Proactively block malicious emails—especially the obvious ones.

While it’s impossible to preemptively block all phishing attempts, it still makes sense to try and prevent the simple stuff. Secure email gateways, for example, can reduce your exposure to commoditized, opportunistic campaigns, and can also come with features to help identify data leakage and support compliance.

In a similar vein, there should be an easy way for your users to take action on a suspected phishing email. This comes in the form of reporting. If your users have an easy, seamless way to report a suspicious email or link, they’ll be much more likely to do so. Put a complicated reporting process in front of them, however, and the chances that they will do so will drop to near-zero. (Ignorance is bliss, remember?) Rapid7’s InsightPhishing can empower your users to report suspected phishes with a one-click report button embedded directly in their email clients.

2. Provide user awareness, training, and simulations.

When you combine phishing awareness and training with an easy way to act on that knowledge, you empower users to become your best security advocates. Phishing is a human problem, so to make your people more resilient, you need to have phishing awareness training in place. This can be done through a two-step process of education, then simulated testing to measure who clicks on the mock phishing emails. Campaigns should be sent out soon after training occurs and then periodically throughout the year—this keeps users on their toes and gives you enough data points to measure long-term training effectiveness.

Over time, you’ll be able to assess how your users are faring at phishing identification and reporting. If you send 100 simulated emails per campaign, is the share of people reporting them climbing from, say, 25 to 50 to 75 percent over time? Ideally it should be, but if not, that may be a sign you need to revisit your training program. With metrics in hand, you can demonstrate training progress to leadership, which can help validate training efforts and highlight improvement across the organization.
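The metric itself is straightforward, and a small sketch makes the reporting-rate calculation concrete. The campaign numbers below are made up for illustration.

```python
# Fabricated simulated-phishing campaign results: per campaign, how many
# emails were sent and how many recipients reported them.
campaigns = [
    {"name": "Q1", "sent": 100, "reported": 18},
    {"name": "Q2", "sent": 100, "reported": 31},
    {"name": "Q3", "sent": 100, "reported": 47},
]

def reporting_rates(campaigns):
    """Reporting rate per campaign: reports divided by emails sent."""
    return {c["name"]: c["reported"] / c["sent"] for c in campaigns}

rates = reporting_rates(campaigns)
for name, rate in rates.items():
    print(f"{name}: {rate:.0%} reported")
```

Tracking the trend across campaigns, rather than any single number, is what tells you whether training is sticking.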

3. Detect users compromised by a phishing attack.

It's inevitable that at some point, one of your users will fall for a phish… if they haven’t already. In order to stop an attacker from accessing sensitive data, you need to know who they duped and what happened. When investigating an incident, it’s essential to know each step an attacker took between user accounts, endpoints, and your network to ensure that remediation is thorough and avoid rebreach.

However, even with automated alerting, phishing investigations take substantial manual effort, sometimes just to confirm that a suspected phishing email is benign. That’s where SIEM technology comes in, giving you the network-wide visibility required to identify compromises across your environment, right down to the exact users who were affected. Together, SIEM and phishing technology can provide you with a defense-in-depth approach.

Having the capability to identify compromised user accounts and credentials is the biggest challenge for nearly every company today. That’s why fusing together user reporting with detection capabilities can give you complete end-to-end visibility.

Employee reporting is the most common method for discovering a breach. By using a tool like InsightPhishing, users can report any suspected phishing emails with one click, directly from their inbox. You’re provided the email’s metadata and body content, which InsightPhishing automatically analyzes for suspicious indicators.

Any of these indicators can be added to a custom phishing indicator database for your organization so that your team can leverage the insights from past investigations. The more you train InsightPhishing, the better it gets. If you identify a malicious phish, you can use a SIEM like Rapid7's InsightIDR to scope out affected users, gain additional context, and improve your defenses. InsightIDR applies two layers of analytics to your data to find attacks: user behavior analytics (UBA) to find users exhibiting behaviors indicative of compromise, and attacker behavior analytics (ABA) to hunt the underlying techniques attackers use time and time again.

Any malicious URLs can be added to InsightIDR, which will then automatically match against future DNS, firewall, web proxy, and other user data. If a user accesses that URL in the future, InsightIDR can automatically alert you or track the action as a notable user behavior. After all, the key to preventing future compromise lies in translating your investigation findings into threat intelligence, and feeding that back into your proactive defenses.

During an investigation of a phishing attack, you can also search through the logs within InsightIDR to see other users who accessed the URL at any point in time. If an attacker sends a phishing email to the entire organization, the log data will tell you the exact users who clicked on it so you can take appropriate action (e.g. change a password, deprovision an account).
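A back-of-the-envelope version of that matching: once a phish’s URL is confirmed malicious, check its domain against DNS query logs to find every user who resolved it. The URL, domains, and log format below are invented for illustration; InsightIDR automates this across DNS, firewall, and web proxy data.

```python
from urllib.parse import urlparse

# A confirmed-malicious URL from an investigation (fabricated example).
malicious_urls = {"http://payroll-update.example.net/login"}
bad_domains = {urlparse(u).hostname for u in malicious_urls}

# Simplified DNS query log: which user resolved which domain.
dns_log = [
    {"user": "jdoe", "query": "payroll-update.example.net"},
    {"user": "asmith", "query": "intranet.example.com"},
    {"user": "blee", "query": "payroll-update.example.net"},
]

def affected_users(dns_log, bad_domains):
    """Users whose DNS queries match a known-malicious domain."""
    return sorted({e["user"] for e in dns_log if e["query"] in bad_domains})

print(affected_users(dns_log, bad_domains))
```

The output is the remediation list: exactly which accounts need a password reset or closer review.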

Additionally, you can correlate the phish with other alerts generated by InsightIDR around the same time (e.g. new malware installed, unusual user account activity, new processes running, vulnerabilities exploited, etc.) to better inform remediation. This level of intel allows you to patch a vulnerability, shut down a process, or lock out an account to keep confidential assets or data safe.

Together, InsightPhishing and InsightIDR are a complementary duo. Findings from one layer inform other layers in your defenses, creating true defense-in-depth.

Defending the modern enterprise is hard work. Between the need for round-the-clock coverage, technology to provide full visibility across the expanding enterprise, a highly skilled and experienced team, and the business level pressure to “prevent a breach,” there is little wonder why we are losing more than we are winning. As a result, organizations that understand the need for threat detection and response, but lack the capital or available headcount to do it themselves, are turning to Managed Detection and Response (MDR) service providers such as Rapid7.

MDR offers people and technology as a bundled package, with the expectation that the provider extend beyond traditional Managed Security Services to directly relieve major pain points: (1) detect compromise as it happens, (2) provide response expertise for any escalated incidents, and (3) help customers identify risk so they can proactively improve security posture. With such a promise, it’s essential to choose a vendor that can actually deliver.

Each provider has a different strategy or scope of data collection, whether it be endpoint detection and response (EDR), network detection, log correlation, or a “proprietary black-box AI.” Further, each provider has a different engagement model, different fine print in the level of services received, and ultimately a different customer experience. While the various bells and whistles can help improve your overall experience, the worst possible outcome is if that provider doesn’t identify a threat in time and the business suffers financial or reputation impact.

What Threats Can You Detect With Your Data?

If you’re in the market for a managed threat detection and response provider, how can you evaluate a vendor’s ability to identify threats? The most basic factor is visibility. Smart people can’t find what they can’t see. If your MDR provider is focused only on logs, or the network, or the endpoint, they may lack visibility to protect what matters to you the most. First, consider the following:

Where does my most valuable data reside?

What would cause the worst reputation impact to my business?

Then, ask this of a potential provider:

Will you collect logs from my most critical systems/applications? (This includes systems and applications outside of the traditional network, e.g. cloud services)

Can you analyze endpoint events from my critical systems?

Are you able to monitor if an attacker impersonates one of my employees?

Can you centralize and analyze events from my existing security technology?

Can you demonstrate how you detect malicious tools, tactics, and procedures across the entire attack lifecycle?

Test drive InsightIDR today with a free 30-day trial

Data Analytics, Investigation, and Response

Once you’re satisfied with the scope of your provider’s data collection, validate their approach to detecting threats. At its most basic, there are three types of threats:

Common Threats
These threats don’t specifically target your organization, but take advantage of common weaknesses in technology environments (which can include people), and use automation to spread. The motivations are typically to gain a small amount of money from a large group of affected systems and people. Some examples include crypto-based threats (where data is held hostage for ransom), viruses that steal banking or social credentials, or botnets that are rented to perpetrate other attacks like distributed denial of service (DDoS) or bitcoin mining. Generally speaking, these types of threats can be prevented using technology such as A/V, IPS, web proxies, and other specialized tools that leverage threat intelligence and trained machine learning.

Targeted Threats
These threats target a specific person, organization, government, or industry with very specific missions. These threat actors are usually professionals who perform these acts for financial compensation, political agenda advancement, or nationalism. These are the threats you hear about in the news. These are the threats that not only cause severe financial impact to your customers and the business, but can also cause loss of competitive advantage in your industry and reputation damage. Generally speaking, these types of threats need to be detected by a combination of skilled and experienced people coupled with the right technology.

Insider Threats
These threats are the hardest to manage. Business insiders use their privileged access as a trusted employee to steal and sell data to competitors or other interested parties. Generally speaking, identifying insider threats requires a strong partnership between human resources and the security department, coupled with highly specialized behavior analysis tools and people. Because insiders already have access, but aren’t always stealing data, looking for unusual behaviors in the data access and data exfiltration steps of the attack chain is critical.

With a basic understanding of threat types, you can ask a provider how they will protect your various systems, applications, data, and users from each type. For Rapid7’s managed threat detection and response services, we apply four threat detection methodologies against data from your network, endpoints, users, logs, cloud applications and services, and existing security technology:

Traditional threat intelligence: Using this methodology, IP addresses, domains, and file hashes are matched against Rapid7’s threat intelligence curated by Rebekah Brown and the Threat Intelligence Team, Mike Scutt and the MDR SOC analysts, Tim Stiller and the Incident Response Team, and the various projects from the Rapid7 Labs team including Project Sonar and Project Heisenberg. Additionally, we are proud, good standing members and on the board of directors for the Cyber Threat Alliance and have established public, commercial, and private partnerships for intelligence sharing. This enables Rapid7 MDR to detect and validate common threats across the traditional and modern enterprise network.

Attacker Behavior Analytics (ABA): Complementing MDR’s ability to detect known tools and threats, our analysts and researchers also strive to detect behaviors associated with attacks and attackers. Our premise is that the tools change often, but the behaviors largely remain the same. During threat validation and incident response, analysts research the events and evidence leading up to the threat or discovered breach. Our SOC analysts and threat intel teams then create detections that look for specific events and evidence that match these specific behaviors. This partnership between our SOCs and InsightIDR engineering teams has led to a data collection and analysis engine that is specifically built for attacker behavior detection. Using this methodology, Rapid7 MDR detects 0-day common threats, targeted threats in all stages of the attack lifecycle, and certain insider threats across the traditional and modern enterprise network. Learn more about Rapid7’s attacker behavior analytics.

User Behavior Analytics (UBA): According to the 2018 Verizon DBIR, the use of stolen credentials and phishing are the first and third most common actions behind confirmed breaches (RAM scrapers came in number two). Understanding typical user behavior and asset relationships is critical when detecting threats in the reconnaissance, lateral movement, and persistence phases of the attack lifecycle. Rapid7’s InsightIDR has the leading user behavior analytics engine to establish a healthy baseline and alert on signs indicative of compromise. Using this methodology, Rapid7 MDR detects destructive common malware (such as worms and crypto-malware), targeted attacks in specific phases of the attack lifecycle, and most insider threats.

Monthly threat hunts: This is the safety net methodology. On a monthly basis, MDR SOC analysts perform a human driven, in-depth threat hunt across forensics artifacts generated from the traditional and modern enterprise network. The visibility provided by the data collection abilities of InsightIDR, data analysis techniques such as frequency analysis, and a purpose-built threat hunting user interface give the MDR SOC analysts the ability to hunt for threats that may have evaded our other three detection methodologies. Using this methodology, Rapid7 MDR detects 0-day common threats, targeted threats at all stages of the attack lifecycle, and identifies potential avenues for insider threats to steal and exfiltrate your data.
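To make the frequency analysis mentioned above concrete, here is a toy version of the technique: stack identical process names across the fleet and surface the rarest ones, since malicious tooling tends to appear on only a handful of assets. The host and process data are fabricated for illustration.

```python
from collections import Counter

# Fabricated fleet telemetry: (host, process name) sightings.
process_sightings = [
    ("HOST-01", "svchost.exe"), ("HOST-02", "svchost.exe"),
    ("HOST-03", "svchost.exe"), ("HOST-01", "chrome.exe"),
    ("HOST-02", "chrome.exe"), ("HOST-03", "m1mik4tz.exe"),
]

def rarest_processes(sightings, threshold=1):
    """Process names seen on no more than `threshold` distinct hosts."""
    hosts_per_proc = Counter()
    seen = set()
    for host, proc in sightings:
        if (host, proc) not in seen:  # count each host once per process
            seen.add((host, proc))
            hosts_per_proc[proc] += 1
    return [p for p, n in hosts_per_proc.items() if n <= threshold]

print(rarest_processes(process_sightings))
```

Rarity is a hunt lead, not a verdict: the analyst still has to investigate each outlier, which is why this methodology is human-driven.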

By getting 100% visibility across your network with our purpose built SIEM, deep visibility into your endpoints, a focus on the user as a key element behind successful breaches, and a layered, overlapping application of multiple threat detection methodologies, Rapid7 Managed Threat Detection and Response Services provides your team with effective, partnership-driven detection and response capabilities.

Learn More About Rapid7’s Managed Threat Detection and Response Services

Whether you call them alerts, alarms, offenses, or incidents, they’re all worthless without supporting context. A failed login attempt may be completely benign ... unless it happened from an anomalous asset or from a suspicious location. Escalation of a user’s privileges could be due to a special project or job promotion … or because that user’s account was compromised. Many security monitoring tools today generate false positive alerts because they’re only able to report on activity, without taking into account the context in which the activity occurred.

In this post, we’ll explore how security analytics can correlate data across your network and pinpoint real security events, so you can stop wasting precious time on frustrating false positives.

Gather context as data streams in

Can you provide a hard number on how many servers and machines are connected to your corporate network? With your network, preventative measures, and monitoring stack all providing a siloed look at events, it can be overwhelming to say the least. Most SIEM or log management tools don’t apply user behavior analytics to these data streams to correlate account behaviors to the entities behind them. This is a problem many security pros run into today: trying to determine whether something is malicious or a misconfiguration can often lead you down a deep rabbit hole.

Luckily, there is a way to make your data work for you. Imagine a world in which data streams in from across the network and is then automatically correlated with the users and assets involved. (To see this world in action, check out our InsightIDR interactive product tour.) Instead of one-off events such as a failed login, you could see a timeline of activity across users, assets, and the network to understand what is really happening (like repetitive logins from Russia onto compromised user accounts) so you don’t arrive at an incorrect conclusion.

Most security pros have had their fair share of barking up the wrong tree because of poorly reported data, but once they see activity in context, it not only reduces overhead of parsing through false alerts, but it makes their investigations and response far more fruitful.

Automatic visibility and separation of admin & service accounts from user accounts

Different account types warrant different permission levels and behaviors, and they should be evaluated accordingly. What happens on a service account vs. a machine-only service vs. a user account is very different, and each should have its own baseline.

InsightIDR automatically tracks admin actions across network, endpoint, and cloud services. If an asset on your network starts taking first time admin actions, that will be surfaced as notable behavior.
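The core of "first time admin action" detection can be sketched in a few lines: keep a set of (asset, action) pairs already observed and flag any pair not seen before. The real product’s baselining is richer than this; the class below just shows the idea, with made-up asset and action names.

```python
class AdminActionBaseline:
    """Flag the first time an asset is seen taking a given admin action."""

    def __init__(self):
        self._seen = set()

    def observe(self, asset: str, action: str) -> bool:
        """Return True (notable) on first sighting of (asset, action)."""
        key = (asset, action)
        if key in self._seen:
            return False
        self._seen.add(key)
        return True

baseline = AdminActionBaseline()
assert baseline.observe("HOST-01", "create_local_admin") is True   # notable
assert baseline.observe("HOST-01", "create_local_admin") is False  # baselined
```

Once an action is baselined for an asset it stays quiet, which is what keeps this class of detection from flooding analysts with repeats.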

This easy account breakdown helps you ask the right questions and enforce the principle of least privilege:

Should this user have these permission levels?

This user is now at a new department; do their privileges still make sense?

This user has left the company; how can we remove them from all accounts ASAP?

With easy access to any user profile and their permissions, you can take action quickly, mitigating the impact of a breach, reducing insider threats, and maintaining full visibility into your network.

Know your most risky users

Every office has them: the click-happy employee particularly vulnerable to phishing attacks, despite repeated security awareness training. Or the traveling employee who is notorious for always operating off the corporate VPN. Then there’s the CFO who has the keys to the kingdom that every attacker wants. If training and preventative measures aren’t enough to prevent these users from getting attacked, you need a way to quickly detect if and when an issue arises.

InsightIDR automatically highlights risky users for you based on past behavior, such as logging in from certain locations, setting poor passwords, and having generated past alerts, such as clicking on a phishing link. It not only shows you who these users are, but you can drill down to see their authentication locations, VPN usage, asset vulns, cloud services, running endpoint processes, and more.

This gives you a better understanding of their behaviors—good and bad—and allows you to retrace user activity with just one search.
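One way to picture a "risky users" view is a simple weighted score over the kinds of signals mentioned above: phishing clicks, risky logins, weak passwords. The weights and signal names below are assumptions for illustration, not InsightIDR’s actual scoring model.

```python
# Hypothetical signal weights -- chosen for illustration only.
RISK_WEIGHTS = {
    "phishing_click": 5,
    "login_from_new_country": 3,
    "non_expiring_password": 2,
}

def risk_score(signals):
    """Sum weighted signals; unknown signal types contribute nothing."""
    return sum(RISK_WEIGHTS.get(s, 0) for s in signals)

users = {
    "jdoe": ["phishing_click", "non_expiring_password"],
    "asmith": ["login_from_new_country"],
}
ranked = sorted(users, key=lambda u: risk_score(users[u]), reverse=True)
print(ranked[0])  # the user to investigate first
```

Even a crude score like this turns a pile of one-off alerts into a prioritized queue, which is the point of surfacing risky users up front.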

Detect malicious behavior on the endpoint

Knowing what is being run on your assets is just as important as knowing risky users in your organization. InsightIDR natively collects endpoint data with our cross-product Insight Agent, which gives you deep asset visibility and real-time detection. Between maintaining the Metasploit project, our 24/7 Security Operations Centers, and thousands of pen test and incident response engagements, we are constantly identifying and investigating new attacker behaviors. Those investigations help us create Attacker Behavior Analytics—tuned behavioral detections that automatically match against your dataset and come with supporting threat intelligence.

Context without the digging

If you want to reliably detect attacks, you need comprehensive data collection backed by ever-evolving security analytics. If you’ve felt the pain of manually parsing log files, or a perplexing, vague alert, you know how quickly it can bring an investigation to a screeching halt. Wouldn’t it be nice to have all this work done for you so that all you have to do is view an alert filled with user, asset, and even attacker context, then jump into action?

You can see this process in more detail with our InsightIDR Interactive Product Tour. If you’re ready to ditch the false positives and deploy in a matter of hours, try our free, guided InsightIDR trial today.

Ditch the false positives, deploy in hours. Try InsightIDR Today.

One of the most important metrics in infosec is “attacker dwell time”—how long does it take to detect and remediate an intrusion? While each year brings improvement, the latest research reveals an average of 95 days, still over three months.

Attackers continue to hide in plain sight by impersonating company users, forcing security teams to overcome two challenges. First, companies must centralize all security-related events and employee behavior on the network. Then, security teams must analyze that mountain of data to expose signs of compromise.

Security information and event management (SIEM) tools are great for data centralization, but have struggled with the analytics layer: making sense of the data mountain. This has led to the explosion of user behavior analytics (UBA), which impacts attacker dwell time via intelligent detections and faster investigations. By first building a baseline of normal user behavior across the network, and then matching new actions against a combination of machine learning and statistical algorithms, UBA exposes threats without relying on signatures or threat intelligence.

If you’re investing in user monitoring as a facet of your program, here are four suggestions (two technical, two human) to maximize your impact.

Comprehensively Collect Data
For a complete picture of user behavior, you need visibility both on and off the corporate network. Traveling employees, remote workers, and cloud services are under your purview, meaning that your user behavior analytics needs to cover that, too. This can include analyzing endpoint authentications and behavior and matching it against user activity from Office 365 or Google Apps. If you’re only collecting logs from headquarters or critical assets, you’ll have glaring blind spots and fewer opportunities to identify an ongoing attack.

Devote Cycles to the Technology
Today’s leading SIEM technologies, like Rapid7’s InsightIDR, come with user behavior analytics, making the SIEM no longer just for investigations and compliance—it can identify real-time risk across users and assets. To get the most out of your user monitoring, you need two types of skillsets: data management and incident response. The right data feeds must be properly centralized, and your team needs to take action on the output.

This is a challenge when the entire industry is clamoring for talent—in response, Managed Detection and Response (MDR) services are quickly rising in popularity. Should you want to tackle incident response in-house, consider a SaaS SIEM or co-managed model. Otherwise, consider an MDR service that both brings security expertise and can help you check the compliance box for log management.

Be Transparent with the Company
Security teams get the bad rap as the “team that says no”. For employees, rolling out user monitoring can feel like an Orwellian mask layered over shadowy operations. Flipping that on its head, UBA can be a great opportunity to share the threat landscape we live in today.

All employees must be vigilant about their credentials: "81% of confirmed breaches involve the misuse of stolen or weak credentials." If an attacker successfully phishes an Office 365 login, they can view that employee’s mailbox, send super-credible phishing attacks, and try for a VPN certificate for internal network access—all without malware.

Sharing how user behavior analytics detects the use of stolen or misused credentials—and will only be used to detect compromise—can help everyone see through the same lens. Your employees should always be the most reliable sources of truth.

Share Successes
Security savants don’t get a lot of the company spotlight. Similar to IT, security often comes to mind only when something is amiss. User behavior analytics can shift that dynamic, as it gives your team the opportunity to improve security posture, and help with employee awareness. This includes identifying risk across credentials and configurations, which can range from unknown admins and running processes to non-expiring passwords. If you’re able to identify and coordinate with IT on fixes, it’s a great story—share both progress and how an attacker could have taken next steps.

Perhaps the best benefit of user behavior analytics is that it can give you room to breathe. Instead of being plagued by endless alerts and scattered investigations, you’ll have the chance to execute a long-term security strategy. We walked through a few suggestions—if you’re able to build mutual trust with employees and have teams see through the same lens, you won’t just be monitoring users; you’ll be understanding normal. And security isn’t about the obviously bad. It’s about the barely abnormal.

If you’re currently tackling an active SIEM project, it’s not easy to dig through libraries of product briefs and outlandish marketing claims. You can turn to trusted peers, but that’s challenging in a world where most leaders aren’t satisfied with their SIEM, even after generous amounts of professional services and third-party management. Luckily, Gartner is no stranger to putting vendors to the test, especially for SIEM: since 2005, they’ve released a yearly Magic Quadrant that evaluates the top SIEM tools. A fun blast from the past...

-Gartner Magic Quadrant, Marc Nicolett, Amrit T. Williams, 2005

Well, change is in the air. Humanity has come a long way in analyzing big data—yes, even security data. Today’s leading SIEMs come standard with pre-built detections that expose both attacks and misconfigurations, without continuous query writing and tuning. SaaS architecture empowers IT operations and infosec teams of all sizes to get up and running in hours, without the tedious, costly side of centralized log management. Being recognized as a "Visionary" in this critical space reaffirms our continued investment in understanding both how attackers operate and the needs of overwhelmed security teams.

Gartner kicks off our description with:
“Rapid7’s SIEM offering InsightIDR is delivered as-a-service via the Rapid7 Insight platform. The solution consists of the InsightIDR service, as well as EDR agents and honeypots for deception activities (both included, but optional to use). The solution provides core SIEM features like log collection and management, threat detection rules and correlations, dashboards, case management and workflow and reporting...”

Today’s SIEM tools aren’t just for compliance and post-breach investigations. Advanced analytics, such as user behavior analytics, are now core to SIEM to help teams find the needles in their ever-growing data stacks. That means in order for project success, the right data sources need to be connected: “If a log falls in a forest and no parser hears it, the SIEM hath no sound.”

We’ve included endpoint visibility in InsightIDR since the beginning—it’s the key to detecting attacks involving remote workers or lateral movement early in the attack chain. InsightIDR now supports the Insight Agent for Mac, meaning you can deploy a single agent on any Windows, Mac, or Linux system for it to feed real-time data across our product portfolio.

What’s the Benefit of the Insight Agent?

Installing and maintaining too many agents adds to your existing workload. The Insight Agent is therefore designed to universally collect data for all Insight solutions and automatically update—although if needed, it can adapt to your patching cycles. Most importantly, we constantly ensure that the Insight Agent is not disruptive to the end user. It has a tiny (50 MB) disk footprint and transmits a daily average of 1-2 MB. The computational analytics take place in InsightIDR, and not the endpoint itself.

Deploying the Insight Agent gives you three benefits in InsightIDR:

System-level visibility

Real-time, deceptive detections

Endpoint investigation and hunting

Visibility

Running processes. Every process on your endpoints is matched against multiple threat intelligence sources, ranging from our Managed Detection and Response (MDR) Services team’s intel to a composite of anti-virus signatures. Across all your assets, unique and rare processes are surfaced to reveal Shadow IT (programs installed without IT oversight) and “unknown-unknowns.”
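As a rough illustration of how rarity surfaces “unknown-unknowns,” here is a toy sketch (not InsightIDR’s actual algorithm) that counts how many assets run each process and returns the rare ones:

```python
from collections import Counter

def rare_processes(asset_processes, max_assets=1):
    """asset_processes: {asset_name: iterable of process names}.
    Return processes seen on at most `max_assets` assets; these are
    candidates for Shadow IT or unknown-unknown review."""
    counts = Counter()
    for procs in asset_processes.values():
        counts.update(set(procs))  # count each process once per asset
    return sorted(p for p, n in counts.items() if n <= max_assets)
```

A process running on every machine is probably sanctioned software; one running on a single machine is worth a look.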

Anomalous admin action. Across InsightIDR, administrative activity is correlated to identify attacks on privileged accounts. Suspicious local-level activity, such as privilege escalation or log deletion, results in alerts to help you react earlier in the attack chain.

Password and protocol attacks (e.g. Mimikatz, Responder): Once the Insight Agent is deployed, honey credentials are injected onto the endpoint—fake password hashes that don’t grant access to any system. If an attacker tries these hashes elsewhere, you’ll receive an alert. An endpoint presence also allows us to creatively detect Responder, a very popular tool that steals user credentials by spoofing trusted network services.
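The honey-credential trap can be sketched roughly like this; the hash value and host names below are invented for illustration, not real InsightIDR internals:

```python
# Toy model of honey credentials: fake hashes planted per endpoint. If one of
# them ever shows up in an authentication attempt, that's a high-fidelity
# alert, since no legitimate workflow uses these credentials.
HONEY_HASHES = {
    # fake hash -> host it was planted on (both illustrative)
    "aad3b435b51404eeaad3b435b51404ee": "webserver01",
}

def check_auth(ntlm_hash, source_host):
    """Return an alert string if a planted honey hash is replayed, else None."""
    planted_on = HONEY_HASHES.get(ntlm_hash)
    if planted_on:
        return (f"ALERT: honey credential planted on {planted_on} "
                f"replayed via {source_host}")
    return None
```

The appeal of the approach is the near-zero false-positive rate: the only way a honey hash appears in an authentication attempt is if someone harvested it from the endpoint.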

Endpoint Hunting

On-Demand Investigation. Every alert generated by InsightIDR comes with a visual timeline that highlights the involved users, assets, and their behaviors. The Insight Agent allows you to go a layer deeper by running on-demand jobs that can pull additional context such as registry keys, scheduled tasks, prefetch data, and more.

And that’s it! If you’re interested in how we detect across the attack chain, check out our Pre-built Detections brief. If you don’t have InsightIDR, start “knowing what you don’t know” with our free, fully-featured trial. Big Data Borat says data science is “99% preparation, 1% misinterpretation”; with InsightIDR, you’ll glide through the prep and find value in hours.

Security Information and Event Management (better known as SIEM): these five words are defined hundreds of ways by thousands of people. Since 2005, industry analysts have debated, through two major market pivots, what exactly solves the primary problems SIEM set out to address. There’s a lot to be learned by watching a market like SIEM adapt as technology evolves, both for the attackers and the analysis.

SIEM (or SEIM): The Monitoring Wild West of the early aughts

When organizations first wanted a centralized, single pane of glass to answer security questions such as “What is happening on our network?” and “Is that activity bad?”, a few dozen software (and hardware; this was almost fifteen years ago) vendors quickly raced to be the solution these forward-thinking security professionals implemented. You would have been crazy not to make a play to be the central backbone of an organization’s security operations. At this point, it didn’t matter that most security budgets were only opened to avoid regulatory fines. If compliance was the way to justify the security team’s spend, the smart vendors found ways both to influence regulation to include log analysis and event review, and subsequently to make compliance reporting table stakes for buyers.

No one had done this before, so there was room for a dozen different approaches to find early success:

“Log collection and analysis is key!” said some vendors.
“No way! It’s the SOC console that matters above all else!”
“You’re all fools! Integration with vulnerability management. That’s the most important capability.”

There wasn’t even agreement on whether the ‘I’ was before ‘E’, even though it didn’t follow a ‘C’ [nor pronounced ‘a’ nor...]! Evaluating these solutions was extremely difficult, but by 2010 or so, three vendors (one of which had already been acquired, twice) were starting to become the shortlist for any Fortune 100 organization: ArcSight, Network Intelligence, and Q1 Labs. This certainly didn’t mean all other solutions had failed, but there was a clear cool kids’ club, and they paid little attention to the outsiders.

But it was when these three winners of the original SIEM Hunger Games were swallowed up by major worldwide conglomerates that three very different vendors fought their way into the fray. One of them, NitroSecurity, was absorbed less than a year after it became a serious competitor. This was the first significant pivot of the SIEM market.

Some security teams dug in and invested more in what they’d already implemented (a nod to the sunk cost fallacy), but others knew that any alternative had to be better. Getting best-of-breed search from a tool IT operations already had in place was definitely better than a box that, theoretically, contained all the data you needed but was useless during actual alert triage and incident analysis.

This pivot silently whittled six leaders down to five, and people made one of three decisions about SIEM tools:

This is home base. Forever. Treat the SIEM like a regularly-updated database and build what is needed on top.

We can change, but only so much. The only options are to switch from the Legacy 3 to the Hipster 2.

‘SIEM’ is a four-letter word (not an acronym). SIEM just won’t work for my organization. It is too complicated and expensive.

Compliance reporting was very clearly still a base requirement for the SIEM space, yet three festering pains were felt across all SIEM 1.0 and 2.0 users: the ridiculous effort required to determine whether an event is normal, an untenable number of vague rule-based alerts, and the cost of ownership under Moore’s Law.

UBA (or UEBA) is dead. Long live UBA!

Though these highly successful factions survived the 2012 pivot, an increasing number of security professionals were voicing their dissatisfaction with their monitoring options. A group of start-ups and, well, Rapid7, homed in on the first two pains in legacy SIEM above, just as the primary tool in the attackers’ arsenal became the theft, and use, of legitimate credentials for widespread compromise without triggering any alarms. The leading five SIEM vendors simply couldn’t provide effective user monitoring or advanced analytics to detect and investigate these attacks. Alerts were far too binary to tell you to look at this authentication instead of those other fifty authentications.

The new Monitoring Wild West, almost exactly a decade later, involved more than a dozen user behavior analytics (UBA) vendors who all used very different statistical anomaly detection and peer group profiling on the datasets they best understood. Whether the solutions had their own data collection or not, it was clear that they were a complement to the behemoth SIEM solutions which had saturated the market. Such a business model of always complementing the hub was doomed to a short life, so through a combination of UBA vendor acquisitions that looked good on paper, check-the-box UBA add-ons, and SIEM enhancements to UBA solutions, two markets converged in early 2017, faster than any industry analyst or CEO dared predict. But there was still one remaining pain among SIEM buyers.
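To give a flavor of the statistical anomaly detection these UBA vendors built, here is a deliberately simple z-score sketch over per-user daily login counts. Real UBA engines use far richer baselines and peer-group profiling; this is only a toy illustration of the idea.

```python
import statistics

def login_anomalies(daily_logins, threshold=3.0):
    """daily_logins: {user: list of daily login counts, newest last}.
    Flag users whose latest day deviates from their own baseline by
    more than `threshold` standard deviations."""
    flagged = []
    for user, counts in daily_logins.items():
        baseline, latest = counts[:-1], counts[-1]
        if len(baseline) < 2:
            continue  # not enough history to build a profile
        mean = statistics.mean(baseline)
        stdev = statistics.pstdev(baseline)
        if stdev and abs(latest - mean) / stdev > threshold:
            flagged.append(user)
    return flagged
```

The key shift from rule-based SIEM alerting is that the threshold is relative to each user’s own history rather than a global, hand-tuned rule.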

Pivot 2: SaaS SIEM—No longer an oxymoron

Software-as-a-service: to some of us children of the eighties, this term is neither scary nor a cause for caution. However, many organizations resisted the idea of their data on someone else’s servers, even if those servers reside in a bomb-proof fortress with armed guards and fewer labels than Hangar 51. Until they stopped resisting…en masse. And it reached a point (in 2017) when even SIEM was redefined to include SaaS (and its cooler twin, cloud) as a delivery model.

As a recovering hardware engineer, I realize that Moore’s Law has been purposely misconstrued to serve many a storyteller’s plot device. The pace of complexity growth for integrated circuits has slowed in recent years, but the need to upgrade hardware appliances, continually reconfigure multi-site indexers, and expand storage clusters is a real burden. If you want to deploy exponentially faster and lessen your maintenance headache, cloud-native solutions finally make value attainable for the vast majority of teams.

After a five-year journey, Rapid7’s mission to create a security monitoring world in which today’s attacks can be detected and handled is getting its due validation. The entire definition of SIEM is shifting (a second time) with us, and we want you to see why. Sign up for a free trial of InsightIDR—we believe it’s the SIEM solution you’ve been waiting for.

In this post, I’d like to share what to expect if you take InsightIDR out for a test drive.

How Can InsightIDR Help Your Team?

Unify data. Nearly every SIEM helps you with centralized log management, speeding up incident investigations and checking the box for compliance. With our cloud architecture and included Insight Agent, our security analytics go a step further, covering common visibility gaps (endpoints and cloud services) without the big data management overhead.

Quick win: Deploy the included Honeypot in your environment to detect network scans.

Prioritize risk. Legacy SIEM is great when you know what you’re looking for; it’s less helpful in showing you where to start. Once set up (within hours!), InsightIDR identifies misconfigurations and risk, ranging from weak password policies to lateral movement. You’ll not only meet compliance requirements and detect attacks, but proactively improve the company’s security posture, with the dashboards to prove it.

OK, I’ll give InsightIDR a whirl. Walk me through this.

In-product messaging will guide you through setup and the user interface.

This sounds great. Anything I should keep in mind?

The trial is for 30 days, so commit time for deployment! We have the easiest-to-deploy SIEM available today, but it requires foundational data sources to be effective. Don’t worry—you’ll be guided every step of the way.

The more you connect to InsightIDR, the better the context it provides; check out our list of supported integrations. We’ve deliberately priced InsightIDR on asset count, not data volume, so you aren’t forced to weigh the detection value of one data set against another.

If you’re not ready to start a deployment, check out our Interactive Product Tour to see how customers are using InsightIDR today.


Sitting down with your data lake and asking it questions has never been easy. In the infosec world, there are additional layers of complexity. Users are bouncing between assets, services, and geographical locations, with each monitoring silo producing its own log files and slivers of the complete picture.


From a human perspective, distilling this data requires two unique skillsets:

Incident Response: Is this anomalous activity a false positive, a misconfiguration, or true malicious behavior?

Data Manipulation: What search query should I construct to get what I need? Do I need to build a custom rule for this, or report on this statistic?

We’ve built InsightIDR with the goal of reducing friction and complexity on both of these fronts. On the incident response side, you’re armed with a dossier of user behavior analytics across network, endpoint, and cloud services to make faster, informed decisions. You can now enjoy Visual Search, which aims to lower the level of complexity associated with writing queries and making sense of your wealth of log data.

Visual Search was first released in InsightOps, our solution for IT infrastructure monitoring and troubleshooting. It’s had a great reception, and we’re proud that it’s now a shared service also available in InsightIDR. Visual Search identifies anomalies, allows for flexible drill-downs, and helps you build queries without using the Log Entries Query Language (LEQL).

Your First Visual Search

In InsightIDR, start by heading to Log Search. You’ll notice that we’ve refreshed the look and feel—we’re continuously improving the speed and responsiveness of the search technology.

A breakdown of the updated interface:

Activate Visual Search by selecting it under the Mode dropdown. At this point, three cards will auto-populate, proactively identifying anomalies from your data. For each data set, we brainstormed with security teams, including our own, to map out interesting starter queries.

You can click on the gear to edit, copy, or remove the card. This is the same architecture as the cards in Dashboards, so the suggested queries can improve your LEQL skills and help you see your data differently.

From here, you can click into any of the bars or data points on the card to drill further. For example, for the “Group by destination_port” card, we can click on the 5666 bar. It automatically performs the search query, where(destination_port=5666).
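That drill-down amounts to templating a LEQL `where(...)` clause from the clicked field and value. A minimal sketch, with simplified quoting rules that may not match LEQL exactly:

```python
def drilldown_query(field, value):
    """Build the where() clause a drill-down runs for a clicked data point.
    String values are quoted; numbers pass through. Quoting is simplified
    here and may not cover every LEQL case."""
    rendered = f'"{value}"' if isinstance(value, str) else str(value)
    return f"where({field}={rendered})"
```

Clicking the 5666 bar on the “Group by destination_port” card corresponds to `drilldown_query("destination_port", 5666)`.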

Visual Search is a great first step in highlighting “where to look.” As each data set is enriched with user and location data, this feature highlights the user behavior analytics core of InsightIDR. These cards couldn’t be populated from the raw log data alone. By proactively identifying anomalies tailored to each data set and guiding you toward LEQL search strings, Visual Search helps you find answers while building skill along the way.